00:00:00.000 Started by upstream project "autotest-per-patch" build number 132056 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.097 The recommended git tool is: git 00:00:00.097 using credential 00000000-0000-0000-0000-000000000002 00:00:00.099 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.148 Fetching changes from the remote Git repository 00:00:00.150 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.195 Using shallow fetch with depth 1 00:00:00.195 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.195 > git --version # timeout=10 00:00:00.225 > git --version # 'git version 2.39.2' 00:00:00.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.247 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.247 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.019 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.029 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.040 Checking out Revision 71582ff3be096f9d5ed302be37c05572278bd285 (FETCH_HEAD) 00:00:06.040 > git config core.sparsecheckout # timeout=10 00:00:06.048 > git read-tree -mu HEAD # timeout=10 00:00:06.062 > git checkout -f 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=5 00:00:06.077 Commit message: "jenkins/jjb-config: Add SPDK_TEST_NVME_INTERRUPT to nvme-phy job" 00:00:06.077 > git rev-list --no-walk 71582ff3be096f9d5ed302be37c05572278bd285 # timeout=10 00:00:06.156 [Pipeline] Start of Pipeline 00:00:06.170 [Pipeline] library 00:00:06.172 Loading library shm_lib@master 00:00:08.213 Library shm_lib@master is cached. Copying from home. 00:00:08.252 [Pipeline] node 00:00:08.431 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:08.433 [Pipeline] { 00:00:08.448 [Pipeline] catchError 00:00:08.450 [Pipeline] { 00:00:08.466 [Pipeline] wrap 00:00:08.476 [Pipeline] { 00:00:08.485 [Pipeline] stage 00:00:08.486 [Pipeline] { (Prologue) 00:00:08.499 [Pipeline] echo 00:00:08.500 Node: VM-host-SM9 00:00:08.505 [Pipeline] cleanWs 00:00:08.515 [WS-CLEANUP] Deleting project workspace... 00:00:08.515 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.521 [WS-CLEANUP] done 00:00:08.732 [Pipeline] setCustomBuildProperty 00:00:08.795 [Pipeline] httpRequest 00:00:11.509 [Pipeline] echo 00:00:11.511 Sorcerer 10.211.164.101 is alive 00:00:11.520 [Pipeline] retry 00:00:11.521 [Pipeline] { 00:00:11.535 [Pipeline] httpRequest 00:00:11.540 HttpMethod: GET 00:00:11.540 URL: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:11.542 Sending request to url: http://10.211.164.101/packages/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:11.558 Response Code: HTTP/1.1 200 OK 00:00:11.558 Success: Status code 200 is in the accepted range: 200,404 00:00:11.559 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:19.348 [Pipeline] } 00:00:19.367 [Pipeline] // retry 00:00:19.376 [Pipeline] sh 00:00:19.659 + tar --no-same-owner -xf jbp_71582ff3be096f9d5ed302be37c05572278bd285.tar.gz 00:00:19.677 [Pipeline] httpRequest 00:00:20.075 [Pipeline] echo 00:00:20.077 Sorcerer 10.211.164.101 is alive 00:00:20.088 [Pipeline] retry 00:00:20.091 [Pipeline] { 00:00:20.107 [Pipeline] httpRequest 00:00:20.111 HttpMethod: GET 00:00:20.112 URL: http://10.211.164.101/packages/spdk_6b98809f9a920fb249c3bdf072746a07851f1d17.tar.gz 00:00:20.113 Sending request to url: http://10.211.164.101/packages/spdk_6b98809f9a920fb249c3bdf072746a07851f1d17.tar.gz 00:00:20.134 Response Code: HTTP/1.1 200 OK 00:00:20.134 Success: Status code 200 is in the accepted range: 200,404 00:00:20.135 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_6b98809f9a920fb249c3bdf072746a07851f1d17.tar.gz 00:00:49.043 [Pipeline] } 00:00:49.062 [Pipeline] // retry 00:00:49.071 [Pipeline] sh 00:00:49.352 + tar --no-same-owner -xf spdk_6b98809f9a920fb249c3bdf072746a07851f1d17.tar.gz 00:00:51.900 [Pipeline] sh 00:00:52.181 + git -C spdk log --oneline -n5 00:00:52.182 6b98809f9 test/scheduler: Read PID's status file only once 00:00:52.182 8810a5ddc test/scheduler: Account for multiple cpus in the affinity mask 00:00:52.182 f220d590c nvmf: rename passthrough_nsid -> passthru_nsid 00:00:52.182 1a1586409 nvmf: use bdev's nsid for admin command passthru 00:00:52.182 892c29f49 nvmf: pass nsid to nvmf_ctrlr_identify_ns() 00:00:52.200 [Pipeline] writeFile 00:00:52.215 [Pipeline] sh 00:00:52.497 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:00:52.509 [Pipeline] sh 00:00:52.789 + cat autorun-spdk.conf 00:00:52.789 SPDK_RUN_FUNCTIONAL_TEST=1 00:00:52.789 SPDK_TEST_NVMF=1 00:00:52.789 SPDK_TEST_NVMF_TRANSPORT=tcp 00:00:52.789 SPDK_TEST_URING=1 00:00:52.789 SPDK_TEST_USDT=1 00:00:52.789 SPDK_RUN_UBSAN=1 00:00:52.789 NET_TYPE=virt 00:00:52.789 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:52.797 RUN_NIGHTLY=0 00:00:52.798 [Pipeline] } 00:00:52.811 [Pipeline] // stage 00:00:52.825 [Pipeline] stage 00:00:52.828 [Pipeline] { (Run VM) 00:00:52.839 [Pipeline] sh 00:00:53.120 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:00:53.120 + echo 'Start stage prepare_nvme.sh' 00:00:53.120 Start stage prepare_nvme.sh 00:00:53.120 + [[ -n 3 ]] 00:00:53.120 + disk_prefix=ex3 00:00:53.120 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:00:53.120 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:00:53.120 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:00:53.120 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:00:53.120 ++ SPDK_TEST_NVMF=1 00:00:53.120 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:00:53.120 ++ SPDK_TEST_URING=1 00:00:53.120 ++ SPDK_TEST_USDT=1 00:00:53.120 ++ SPDK_RUN_UBSAN=1 00:00:53.120 ++ NET_TYPE=virt 00:00:53.120 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:00:53.120 ++ RUN_NIGHTLY=0 00:00:53.120 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:53.120 + nvme_files=() 00:00:53.120 + declare -A nvme_files 00:00:53.120 + backend_dir=/var/lib/libvirt/images/backends 00:00:53.120 + nvme_files['nvme.img']=5G 00:00:53.120 + nvme_files['nvme-cmb.img']=5G 00:00:53.120 + nvme_files['nvme-multi0.img']=4G 00:00:53.120 + nvme_files['nvme-multi1.img']=4G 00:00:53.120 + nvme_files['nvme-multi2.img']=4G 00:00:53.120 + nvme_files['nvme-openstack.img']=8G 00:00:53.121 + nvme_files['nvme-zns.img']=5G 00:00:53.121 + (( SPDK_TEST_NVME_PMR == 1 )) 00:00:53.121 + (( SPDK_TEST_FTL == 1 )) 00:00:53.121 + (( SPDK_TEST_NVME_FDP == 1 )) 00:00:53.121 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:00:53.121 + for nvme in "${!nvme_files[@]}" 00:00:53.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:00:53.121 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.121 + for nvme in "${!nvme_files[@]}" 00:00:53.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:00:53.121 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.121 + for nvme in "${!nvme_files[@]}" 00:00:53.121 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:00:53.380 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:00:53.380 + for nvme in "${!nvme_files[@]}" 00:00:53.380 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:00:53.380 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.380 + for nvme in "${!nvme_files[@]}" 00:00:53.380 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:00:53.380 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.380 + for nvme in "${!nvme_files[@]}" 00:00:53.380 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:00:53.380 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:00:53.380 + for nvme in "${!nvme_files[@]}" 00:00:53.380 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:00:53.639 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:00:53.639 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:00:53.639 + echo 'End stage prepare_nvme.sh' 00:00:53.639 End stage prepare_nvme.sh 00:00:53.651 [Pipeline] sh 00:00:53.931 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:00:53.931 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b 
/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:00:54.189 00:00:54.189 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:00:54.189 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:00:54.189 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:54.189 HELP=0 00:00:54.189 DRY_RUN=0 00:00:54.189 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:00:54.189 NVME_DISKS_TYPE=nvme,nvme, 00:00:54.189 NVME_AUTO_CREATE=0 00:00:54.189 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:00:54.189 NVME_CMB=,, 00:00:54.189 NVME_PMR=,, 00:00:54.189 NVME_ZNS=,, 00:00:54.189 NVME_MS=,, 00:00:54.189 NVME_FDP=,, 00:00:54.189 SPDK_VAGRANT_DISTRO=fedora39 00:00:54.189 SPDK_VAGRANT_VMCPU=10 00:00:54.189 SPDK_VAGRANT_VMRAM=12288 00:00:54.189 SPDK_VAGRANT_PROVIDER=libvirt 00:00:54.189 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:00:54.189 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:00:54.189 SPDK_OPENSTACK_NETWORK=0 00:00:54.189 VAGRANT_PACKAGE_BOX=0 00:00:54.189 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:00:54.189 FORCE_DISTRO=true 00:00:54.189 VAGRANT_BOX_VERSION= 00:00:54.189 EXTRA_VAGRANTFILES= 00:00:54.189 NIC_MODEL=e1000 00:00:54.189 00:00:54.189 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:00:54.189 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:57.477 Bringing machine 'default' up with 'libvirt' provider... 00:00:57.754 ==> default: Creating image (snapshot of base box volume). 00:00:57.754 ==> default: Creating domain with the following settings... 
00:00:57.754 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730798623_aa69abffc057d19a8916 00:00:57.754 ==> default: -- Domain type: kvm 00:00:57.754 ==> default: -- Cpus: 10 00:00:57.754 ==> default: -- Feature: acpi 00:00:57.754 ==> default: -- Feature: apic 00:00:57.754 ==> default: -- Feature: pae 00:00:57.754 ==> default: -- Memory: 12288M 00:00:57.754 ==> default: -- Memory Backing: hugepages: 00:00:57.754 ==> default: -- Management MAC: 00:00:57.754 ==> default: -- Loader: 00:00:57.754 ==> default: -- Nvram: 00:00:57.754 ==> default: -- Base box: spdk/fedora39 00:00:57.754 ==> default: -- Storage pool: default 00:00:57.754 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730798623_aa69abffc057d19a8916.img (20G) 00:00:57.754 ==> default: -- Volume Cache: default 00:00:57.754 ==> default: -- Kernel: 00:00:57.754 ==> default: -- Initrd: 00:00:57.754 ==> default: -- Graphics Type: vnc 00:00:57.754 ==> default: -- Graphics Port: -1 00:00:57.754 ==> default: -- Graphics IP: 127.0.0.1 00:00:57.754 ==> default: -- Graphics Password: Not defined 00:00:57.754 ==> default: -- Video Type: cirrus 00:00:57.754 ==> default: -- Video VRAM: 9216 00:00:57.754 ==> default: -- Sound Type: 00:00:57.754 ==> default: -- Keymap: en-us 00:00:57.754 ==> default: -- TPM Path: 00:00:57.754 ==> default: -- INPUT: type=mouse, bus=ps2 00:00:57.754 ==> default: -- Command line args: 00:00:57.754 ==> default: -> value=-device, 00:00:57.754 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:00:57.754 ==> default: -> value=-drive, 00:00:57.754 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:00:57.754 ==> default: -> value=-device, 00:00:57.754 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.754 ==> default: -> value=-device, 00:00:57.754 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:00:57.754 ==> default: -> value=-drive, 00:00:57.754 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:00:57.754 ==> default: -> value=-device, 00:00:57.754 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.754 ==> default: -> value=-drive, 00:00:57.754 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:00:57.754 ==> default: -> value=-device, 00:00:57.754 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.754 ==> default: -> value=-drive, 00:00:57.754 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:00:57.754 ==> default: -> value=-device, 00:00:57.754 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:00:57.755 ==> default: Creating shared folders metadata... 00:00:57.755 ==> default: Starting domain. 00:00:59.148 ==> default: Waiting for domain to get an IP address... 00:01:17.238 ==> default: Waiting for SSH to become available... 00:01:18.174 ==> default: Configuring and enabling network interfaces... 
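[Editor's note] The -device/-drive pairs in the domain definition above wire up two emulated NVMe controllers: nvme-0 (serial 12340, one namespace backed by ex3-nvme.img) and nvme-1 (serial 12341, three namespaces backed by ex3-nvme-multi0/1/2.img). A minimal sketch of the equivalent standalone QEMU invocation, assuming the raw backing images were already created by prepare_nvme.sh; the device arguments are copied verbatim from the log, while the memory size and accelerator flags are illustrative placeholders only:

    #!/usr/bin/env bash
    # Sketch (not the CI's exact invocation): recreate the two-controller
    # NVMe topology from the libvirt domain above with plain QEMU.
    backends=/var/lib/libvirt/images/backends
    qemu-system-x86_64 -enable-kvm -m 2048 \
      -device nvme,id=nvme-0,serial=12340,addr=0x10 \
      -drive format=raw,file=$backends/ex3-nvme.img,if=none,id=nvme-0-drive0 \
      -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=$backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=$backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1 \
      -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
      -drive format=raw,file=$backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2 \
      -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

Inside the guest these surface as nvme0n1 and nvme1n1..nvme1n3, which matches the "setup.sh status" block device listing later in this log.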
00:01:22.366 default: SSH address: 192.168.121.135:22 00:01:22.366 default: SSH username: vagrant 00:01:22.366 default: SSH auth method: private key 00:01:24.901 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:01:33.021 ==> default: Mounting SSHFS shared folder... 00:01:33.957 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:01:33.957 ==> default: Checking Mount.. 00:01:34.968 ==> default: Folder Successfully Mounted! 00:01:34.968 ==> default: Running provisioner: file... 00:01:35.904 default: ~/.gitconfig => .gitconfig 00:01:36.163 00:01:36.163 SUCCESS! 00:01:36.163 00:01:36.163 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:01:36.163 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:01:36.163 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:01:36.163 00:01:36.173 [Pipeline] } 00:01:36.188 [Pipeline] // stage 00:01:36.197 [Pipeline] dir 00:01:36.198 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:01:36.200 [Pipeline] { 00:01:36.214 [Pipeline] catchError 00:01:36.215 [Pipeline] { 00:01:36.227 [Pipeline] sh 00:01:36.507 + vagrant ssh-config --host vagrant 00:01:36.507 + sed -ne /^Host/,$p 00:01:36.507 + tee ssh_conf 00:01:39.795 Host vagrant 00:01:39.795 HostName 192.168.121.135 00:01:39.795 User vagrant 00:01:39.795 Port 22 00:01:39.795 UserKnownHostsFile /dev/null 00:01:39.795 StrictHostKeyChecking no 00:01:39.795 PasswordAuthentication no 00:01:39.795 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:01:39.795 IdentitiesOnly yes 00:01:39.795 LogLevel FATAL 00:01:39.795 ForwardAgent yes 00:01:39.795 ForwardX11 yes 00:01:39.795 00:01:39.810 [Pipeline] withEnv 00:01:39.812 [Pipeline] { 00:01:39.826 [Pipeline] sh 00:01:40.105 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:01:40.105 source /etc/os-release 00:01:40.105 [[ -e /image.version ]] && img=$(< /image.version) 00:01:40.105 # Minimal, systemd-like check. 00:01:40.105 if [[ -e /.dockerenv ]]; then 00:01:40.105 # Clear garbage from the node's name: 00:01:40.105 # agt-er_autotest_547-896 -> autotest_547-896 00:01:40.105 # $HOSTNAME is the actual container id 00:01:40.105 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:01:40.105 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:01:40.105 # We can assume this is a mount from a host where container is running, 00:01:40.105 # so fetch its hostname to easily identify the target swarm worker. 
00:01:40.105 container="$(< /etc/hostname) ($agent)" 00:01:40.105 else 00:01:40.105 # Fallback 00:01:40.105 container=$agent 00:01:40.105 fi 00:01:40.105 fi 00:01:40.105 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:01:40.105 00:01:40.375 [Pipeline] } 00:01:40.391 [Pipeline] // withEnv 00:01:40.400 [Pipeline] setCustomBuildProperty 00:01:40.415 [Pipeline] stage 00:01:40.417 [Pipeline] { (Tests) 00:01:40.435 [Pipeline] sh 00:01:40.714 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:01:40.987 [Pipeline] sh 00:01:41.268 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:01:41.542 [Pipeline] timeout 00:01:41.542 Timeout set to expire in 1 hr 0 min 00:01:41.544 [Pipeline] { 00:01:41.558 [Pipeline] sh 00:01:41.841 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:01:42.407 HEAD is now at 6b98809f9 test/scheduler: Read PID's status file only once 00:01:42.419 [Pipeline] sh 00:01:42.698 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:01:42.971 [Pipeline] sh 00:01:43.253 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:01:43.530 [Pipeline] sh 00:01:43.824 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:01:44.089 ++ readlink -f spdk_repo 00:01:44.089 + DIR_ROOT=/home/vagrant/spdk_repo 00:01:44.089 + [[ -n /home/vagrant/spdk_repo ]] 00:01:44.089 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:01:44.089 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:01:44.089 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:01:44.089 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:01:44.089 + [[ -d /home/vagrant/spdk_repo/output ]] 00:01:44.089 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:01:44.089 + cd /home/vagrant/spdk_repo 00:01:44.089 + source /etc/os-release 00:01:44.089 ++ NAME='Fedora Linux' 00:01:44.089 ++ VERSION='39 (Cloud Edition)' 00:01:44.089 ++ ID=fedora 00:01:44.089 ++ VERSION_ID=39 00:01:44.089 ++ VERSION_CODENAME= 00:01:44.089 ++ PLATFORM_ID=platform:f39 00:01:44.089 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:44.089 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:44.089 ++ LOGO=fedora-logo-icon 00:01:44.089 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:44.089 ++ HOME_URL=https://fedoraproject.org/ 00:01:44.089 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:44.089 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:44.089 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:44.089 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:44.089 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:44.089 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:44.089 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:44.089 ++ SUPPORT_END=2024-11-12 00:01:44.089 ++ VARIANT='Cloud Edition' 00:01:44.089 ++ VARIANT_ID=cloud 00:01:44.089 + uname -a 00:01:44.089 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:44.089 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:01:44.347 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:01:44.347 Hugepages 00:01:44.347 node hugesize free / total 00:01:44.347 node0 1048576kB 0 / 0 00:01:44.605 node0 2048kB 0 / 0 00:01:44.605 00:01:44.605 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:44.605 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:01:44.605 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:01:44.605 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:01:44.605 + rm -f /tmp/spdk-ld-path 00:01:44.605 + source autorun-spdk.conf 00:01:44.605 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.605 ++ SPDK_TEST_NVMF=1 00:01:44.605 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.605 ++ SPDK_TEST_URING=1 00:01:44.605 ++ SPDK_TEST_USDT=1 00:01:44.605 ++ SPDK_RUN_UBSAN=1 00:01:44.605 ++ NET_TYPE=virt 00:01:44.605 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:44.605 ++ RUN_NIGHTLY=0 00:01:44.605 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:44.605 + [[ -n '' ]] 00:01:44.605 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:01:44.605 + for M in /var/spdk/build-*-manifest.txt 00:01:44.605 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:44.605 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:44.605 + for M in /var/spdk/build-*-manifest.txt 00:01:44.605 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:44.605 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:44.605 + for M in /var/spdk/build-*-manifest.txt 00:01:44.605 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:44.605 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:01:44.605 ++ uname 00:01:44.605 + [[ Linux == \L\i\n\u\x ]] 00:01:44.605 + sudo dmesg -T 00:01:44.605 + sudo dmesg --clear 00:01:44.605 + dmesg_pid=5256 00:01:44.605 + [[ Fedora Linux == FreeBSD ]] 00:01:44.605 + sudo dmesg -Tw 00:01:44.605 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.605 + 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:44.605 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:44.605 + [[ -x /usr/src/fio-static/fio ]] 00:01:44.605 + export FIO_BIN=/usr/src/fio-static/fio 00:01:44.605 + FIO_BIN=/usr/src/fio-static/fio 00:01:44.605 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:44.605 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:44.605 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:44.605 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.605 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:44.605 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:44.605 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.605 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:44.605 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:44.865 09:24:30 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:44.865 09:24:30 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_USDT=1 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@7 -- $ NET_TYPE=virt 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:44.865 09:24:30 -- spdk_repo/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:44.865 09:24:30 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:44.865 09:24:30 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:01:44.865 09:24:30 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:01:44.865 09:24:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:01:44.865 09:24:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:44.865 09:24:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:44.865 09:24:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:44.865 09:24:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:44.865 09:24:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.865 09:24:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.865 09:24:30 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.865 09:24:30 -- paths/export.sh@5 -- $ export PATH 00:01:44.865 09:24:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:44.865 09:24:30 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:01:44.865 09:24:30 -- common/autobuild_common.sh@486 -- $ date +%s 00:01:44.865 09:24:30 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730798670.XXXXXX 00:01:44.865 09:24:30 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730798670.Q5mhR5 00:01:44.865 09:24:30 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:01:44.865 09:24:30 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:01:44.865 09:24:30 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:01:44.865 09:24:30 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:01:44.865 09:24:30 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:01:44.865 09:24:30 -- common/autobuild_common.sh@502 -- $ get_config_params 00:01:44.865 09:24:30 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:01:44.865 09:24:30 -- common/autotest_common.sh@10 -- $ set +x 00:01:44.865 09:24:30 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring' 00:01:44.865 09:24:30 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:01:44.865 09:24:30 -- pm/common@17 -- $ local monitor 00:01:44.865 09:24:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.865 09:24:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:44.865 09:24:30 -- pm/common@25 -- $ sleep 1 00:01:44.865 09:24:30 -- pm/common@21 -- $ date +%s 00:01:44.865 09:24:30 -- pm/common@21 -- $ date +%s 00:01:44.865 09:24:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730798670 00:01:44.865 09:24:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730798670 00:01:44.865 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730798670_collect-cpu-load.pm.log 00:01:44.865 Redirecting to 
/home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730798670_collect-vmstat.pm.log 00:01:45.803 09:24:31 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:01:45.803 09:24:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:45.803 09:24:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:45.803 09:24:31 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:01:45.803 09:24:31 -- spdk/autobuild.sh@16 -- $ date -u 00:01:45.803 Tue Nov 5 09:24:31 AM UTC 2024 00:01:45.803 09:24:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:45.803 v25.01-pre-160-g6b98809f9 00:01:45.803 09:24:31 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:45.803 09:24:31 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:45.804 09:24:31 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:45.804 09:24:31 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:01:45.804 09:24:31 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:01:45.804 09:24:31 -- common/autotest_common.sh@10 -- $ set +x 00:01:45.804 ************************************ 00:01:45.804 START TEST ubsan 00:01:45.804 ************************************ 00:01:45.804 using ubsan 00:01:45.804 09:24:31 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:01:45.804 00:01:45.804 real 0m0.000s 00:01:45.804 user 0m0.000s 00:01:45.804 sys 0m0.000s 00:01:45.804 09:24:31 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:01:45.804 09:24:31 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:45.804 ************************************ 00:01:45.804 END TEST ubsan 00:01:45.804 ************************************ 00:01:45.804 09:24:31 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:45.804 09:24:31 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:45.804 09:24:31 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:45.804 09:24:31 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:45.804 09:24:31 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:45.804 09:24:31 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:45.804 09:24:31 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:45.804 09:24:31 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:45.804 09:24:31 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-shared 00:01:46.063 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:01:46.063 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:01:46.632 Using 'verbs' RDMA provider 00:02:02.452 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:14.702 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:14.702 Creating mk/config.mk...done. 00:02:14.702 Creating mk/cc.flags.mk...done. 00:02:14.702 Type 'make' to build. 
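[Editor's note] The configure invocation above fully pins down this job's build: debug with -Werror, RDMA, USDT probes, idxd, the fio plugin, the iSCSI initiator, UBSan, coverage, ublk, io_uring, and shared libraries, with unit tests disabled. A minimal sketch for reproducing the same configuration outside the CI, assuming a fresh SPDK checkout and that dependencies and fio sources are already in place (the flag set is copied from the log; the clone/submodule steps are standard SPDK setup, not taken from this job):

    #!/usr/bin/env bash
    # Sketch: reproduce this job's build configuration locally.
    # Assumes dependencies were installed (e.g. via scripts/pkgdep.sh) and
    # that /usr/src/fio holds fio sources for the --with-fio plugin build.
    git clone https://github.com/spdk/spdk && cd spdk
    git submodule update --init
    ./configure --enable-debug --enable-werror --with-rdma --with-usdt \
        --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator \
        --disable-unit-tests --enable-ubsan --enable-coverage \
        --with-ublk --with-uring --with-shared
    make -j10   # matches the "run_test make make -j10" step below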
00:02:14.702 09:24:59 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:14.702 09:24:59 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:14.702 09:24:59 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:14.702 09:24:59 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.702 ************************************ 00:02:14.702 START TEST make 00:02:14.702 ************************************ 00:02:14.702 09:24:59 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:14.702 make[1]: Nothing to be done for 'all'. 00:02:26.901 The Meson build system 00:02:26.901 Version: 1.5.0 00:02:26.901 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:26.901 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:26.901 Build type: native build 00:02:26.901 Program cat found: YES (/usr/bin/cat) 00:02:26.901 Project name: DPDK 00:02:26.901 Project version: 24.03.0 00:02:26.901 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:26.901 C linker for the host machine: cc ld.bfd 2.40-14 00:02:26.901 Host machine cpu family: x86_64 00:02:26.901 Host machine cpu: x86_64 00:02:26.901 Message: ## Building in Developer Mode ## 00:02:26.901 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:26.901 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:26.901 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:26.901 Program python3 found: YES (/usr/bin/python3) 00:02:26.901 Program cat found: YES (/usr/bin/cat) 00:02:26.901 Compiler for C supports arguments -march=native: YES 00:02:26.901 Checking for size of "void *" : 8 00:02:26.901 Checking for size of "void *" : 8 (cached) 00:02:26.901 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:26.901 Library m found: YES 00:02:26.901 Library numa found: YES 00:02:26.901 Has header "numaif.h" : YES 00:02:26.901 Library fdt found: NO 00:02:26.901 Library execinfo found: NO 00:02:26.901 Has header "execinfo.h" : YES 00:02:26.901 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:26.901 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:26.901 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:26.901 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:26.901 Run-time dependency openssl found: YES 3.1.1 00:02:26.901 Run-time dependency libpcap found: YES 1.10.4 00:02:26.901 Has header "pcap.h" with dependency libpcap: YES 00:02:26.901 Compiler for C supports arguments -Wcast-qual: YES 00:02:26.901 Compiler for C supports arguments -Wdeprecated: YES 00:02:26.901 Compiler for C supports arguments -Wformat: YES 00:02:26.901 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:26.901 Compiler for C supports arguments -Wformat-security: NO 00:02:26.901 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:26.901 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:26.901 Compiler for C supports arguments -Wnested-externs: YES 00:02:26.901 Compiler for C supports arguments -Wold-style-definition: YES 00:02:26.901 Compiler for C supports arguments -Wpointer-arith: YES 00:02:26.901 Compiler for C supports arguments -Wsign-compare: YES 00:02:26.901 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:26.901 Compiler for C supports arguments -Wundef: YES 00:02:26.901 Compiler for C supports arguments -Wwrite-strings: YES 00:02:26.901 Compiler for C supports 
arguments -Wno-address-of-packed-member: YES 00:02:26.901 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:26.901 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:26.901 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:26.901 Program objdump found: YES (/usr/bin/objdump) 00:02:26.901 Compiler for C supports arguments -mavx512f: YES 00:02:26.901 Checking if "AVX512 checking" compiles: YES 00:02:26.901 Fetching value of define "__SSE4_2__" : 1 00:02:26.901 Fetching value of define "__AES__" : 1 00:02:26.901 Fetching value of define "__AVX__" : 1 00:02:26.901 Fetching value of define "__AVX2__" : 1 00:02:26.901 Fetching value of define "__AVX512BW__" : (undefined) 00:02:26.901 Fetching value of define "__AVX512CD__" : (undefined) 00:02:26.901 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:26.901 Fetching value of define "__AVX512F__" : (undefined) 00:02:26.901 Fetching value of define "__AVX512VL__" : (undefined) 00:02:26.901 Fetching value of define "__PCLMUL__" : 1 00:02:26.901 Fetching value of define "__RDRND__" : 1 00:02:26.901 Fetching value of define "__RDSEED__" : 1 00:02:26.901 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:26.901 Fetching value of define "__znver1__" : (undefined) 00:02:26.901 Fetching value of define "__znver2__" : (undefined) 00:02:26.901 Fetching value of define "__znver3__" : (undefined) 00:02:26.901 Fetching value of define "__znver4__" : (undefined) 00:02:26.901 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:26.901 Message: lib/log: Defining dependency "log" 00:02:26.901 Message: lib/kvargs: Defining dependency "kvargs" 00:02:26.901 Message: lib/telemetry: Defining dependency "telemetry" 00:02:26.901 Checking for function "getentropy" : NO 00:02:26.901 Message: lib/eal: Defining dependency "eal" 00:02:26.901 Message: lib/ring: Defining dependency "ring" 00:02:26.901 Message: lib/rcu: Defining dependency "rcu" 00:02:26.901 Message: lib/mempool: Defining dependency "mempool" 00:02:26.901 Message: lib/mbuf: Defining dependency "mbuf" 00:02:26.901 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:26.901 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:26.901 Compiler for C supports arguments -mpclmul: YES 00:02:26.901 Compiler for C supports arguments -maes: YES 00:02:26.901 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:26.901 Compiler for C supports arguments -mavx512bw: YES 00:02:26.901 Compiler for C supports arguments -mavx512dq: YES 00:02:26.901 Compiler for C supports arguments -mavx512vl: YES 00:02:26.901 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:26.901 Compiler for C supports arguments -mavx2: YES 00:02:26.901 Compiler for C supports arguments -mavx: YES 00:02:26.901 Message: lib/net: Defining dependency "net" 00:02:26.901 Message: lib/meter: Defining dependency "meter" 00:02:26.901 Message: lib/ethdev: Defining dependency "ethdev" 00:02:26.901 Message: lib/pci: Defining dependency "pci" 00:02:26.901 Message: lib/cmdline: Defining dependency "cmdline" 00:02:26.901 Message: lib/hash: Defining dependency "hash" 00:02:26.901 Message: lib/timer: Defining dependency "timer" 00:02:26.901 Message: lib/compressdev: Defining dependency "compressdev" 00:02:26.901 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:26.901 Message: lib/dmadev: Defining dependency "dmadev" 00:02:26.901 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:26.901 Message: lib/power: Defining 
dependency "power" 00:02:26.901 Message: lib/reorder: Defining dependency "reorder" 00:02:26.901 Message: lib/security: Defining dependency "security" 00:02:26.901 Has header "linux/userfaultfd.h" : YES 00:02:26.901 Has header "linux/vduse.h" : YES 00:02:26.901 Message: lib/vhost: Defining dependency "vhost" 00:02:26.901 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:26.901 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:26.901 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:26.901 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:26.901 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:26.901 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:26.901 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:26.901 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:26.901 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:26.901 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:26.901 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:26.901 Configuring doxy-api-html.conf using configuration 00:02:26.901 Configuring doxy-api-man.conf using configuration 00:02:26.901 Program mandb found: YES (/usr/bin/mandb) 00:02:26.901 Program sphinx-build found: NO 00:02:26.901 Configuring rte_build_config.h using configuration 00:02:26.901 Message: 00:02:26.901 ================= 00:02:26.901 Applications Enabled 00:02:26.901 ================= 00:02:26.901 00:02:26.901 apps: 00:02:26.901 00:02:26.901 00:02:26.901 Message: 00:02:26.901 ================= 00:02:26.901 Libraries Enabled 00:02:26.901 ================= 00:02:26.901 00:02:26.901 libs: 00:02:26.901 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:26.901 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:26.901 cryptodev, dmadev, power, reorder, security, vhost, 00:02:26.901 00:02:26.901 Message: 00:02:26.901 =============== 00:02:26.901 Drivers Enabled 00:02:26.901 =============== 00:02:26.901 00:02:26.901 common: 00:02:26.901 00:02:26.901 bus: 00:02:26.901 pci, vdev, 00:02:26.901 mempool: 00:02:26.901 ring, 00:02:26.901 dma: 00:02:26.901 00:02:26.901 net: 00:02:26.901 00:02:26.901 crypto: 00:02:26.901 00:02:26.901 compress: 00:02:26.901 00:02:26.901 vdpa: 00:02:26.901 00:02:26.901 00:02:26.901 Message: 00:02:26.901 ================= 00:02:26.901 Content Skipped 00:02:26.901 ================= 00:02:26.901 00:02:26.901 apps: 00:02:26.901 dumpcap: explicitly disabled via build config 00:02:26.901 graph: explicitly disabled via build config 00:02:26.901 pdump: explicitly disabled via build config 00:02:26.902 proc-info: explicitly disabled via build config 00:02:26.902 test-acl: explicitly disabled via build config 00:02:26.902 test-bbdev: explicitly disabled via build config 00:02:26.902 test-cmdline: explicitly disabled via build config 00:02:26.902 test-compress-perf: explicitly disabled via build config 00:02:26.902 test-crypto-perf: explicitly disabled via build config 00:02:26.902 test-dma-perf: explicitly disabled via build config 00:02:26.902 test-eventdev: explicitly disabled via build config 00:02:26.902 test-fib: explicitly disabled via build config 00:02:26.902 test-flow-perf: explicitly disabled via build config 00:02:26.902 test-gpudev: explicitly disabled via build config 00:02:26.902 test-mldev: explicitly disabled via build config 00:02:26.902 test-pipeline: 
explicitly disabled via build config 00:02:26.902 test-pmd: explicitly disabled via build config 00:02:26.902 test-regex: explicitly disabled via build config 00:02:26.902 test-sad: explicitly disabled via build config 00:02:26.902 test-security-perf: explicitly disabled via build config 00:02:26.902 00:02:26.902 libs: 00:02:26.902 argparse: explicitly disabled via build config 00:02:26.902 metrics: explicitly disabled via build config 00:02:26.902 acl: explicitly disabled via build config 00:02:26.902 bbdev: explicitly disabled via build config 00:02:26.902 bitratestats: explicitly disabled via build config 00:02:26.902 bpf: explicitly disabled via build config 00:02:26.902 cfgfile: explicitly disabled via build config 00:02:26.902 distributor: explicitly disabled via build config 00:02:26.902 efd: explicitly disabled via build config 00:02:26.902 eventdev: explicitly disabled via build config 00:02:26.902 dispatcher: explicitly disabled via build config 00:02:26.902 gpudev: explicitly disabled via build config 00:02:26.902 gro: explicitly disabled via build config 00:02:26.902 gso: explicitly disabled via build config 00:02:26.902 ip_frag: explicitly disabled via build config 00:02:26.902 jobstats: explicitly disabled via build config 00:02:26.902 latencystats: explicitly disabled via build config 00:02:26.902 lpm: explicitly disabled via build config 00:02:26.902 member: explicitly disabled via build config 00:02:26.902 pcapng: explicitly disabled via build config 00:02:26.902 rawdev: explicitly disabled via build config 00:02:26.902 regexdev: explicitly disabled via build config 00:02:26.902 mldev: explicitly disabled via build config 00:02:26.902 rib: explicitly disabled via build config 00:02:26.902 sched: explicitly disabled via build config 00:02:26.902 stack: explicitly disabled via build config 00:02:26.902 ipsec: explicitly disabled via build config 00:02:26.902 pdcp: explicitly disabled via build config 00:02:26.902 fib: explicitly disabled via build config 00:02:26.902 port: explicitly disabled via build config 00:02:26.902 pdump: explicitly disabled via build config 00:02:26.902 table: explicitly disabled via build config 00:02:26.902 pipeline: explicitly disabled via build config 00:02:26.902 graph: explicitly disabled via build config 00:02:26.902 node: explicitly disabled via build config 00:02:26.902 00:02:26.902 drivers: 00:02:26.902 common/cpt: not in enabled drivers build config 00:02:26.902 common/dpaax: not in enabled drivers build config 00:02:26.902 common/iavf: not in enabled drivers build config 00:02:26.902 common/idpf: not in enabled drivers build config 00:02:26.902 common/ionic: not in enabled drivers build config 00:02:26.902 common/mvep: not in enabled drivers build config 00:02:26.902 common/octeontx: not in enabled drivers build config 00:02:26.902 bus/auxiliary: not in enabled drivers build config 00:02:26.902 bus/cdx: not in enabled drivers build config 00:02:26.902 bus/dpaa: not in enabled drivers build config 00:02:26.902 bus/fslmc: not in enabled drivers build config 00:02:26.902 bus/ifpga: not in enabled drivers build config 00:02:26.902 bus/platform: not in enabled drivers build config 00:02:26.902 bus/uacce: not in enabled drivers build config 00:02:26.902 bus/vmbus: not in enabled drivers build config 00:02:26.902 common/cnxk: not in enabled drivers build config 00:02:26.902 common/mlx5: not in enabled drivers build config 00:02:26.902 common/nfp: not in enabled drivers build config 00:02:26.902 common/nitrox: not in enabled drivers build config 
00:02:26.902 common/qat: not in enabled drivers build config 00:02:26.902 common/sfc_efx: not in enabled drivers build config 00:02:26.902 mempool/bucket: not in enabled drivers build config 00:02:26.902 mempool/cnxk: not in enabled drivers build config 00:02:26.902 mempool/dpaa: not in enabled drivers build config 00:02:26.902 mempool/dpaa2: not in enabled drivers build config 00:02:26.902 mempool/octeontx: not in enabled drivers build config 00:02:26.902 mempool/stack: not in enabled drivers build config 00:02:26.902 dma/cnxk: not in enabled drivers build config 00:02:26.902 dma/dpaa: not in enabled drivers build config 00:02:26.902 dma/dpaa2: not in enabled drivers build config 00:02:26.902 dma/hisilicon: not in enabled drivers build config 00:02:26.902 dma/idxd: not in enabled drivers build config 00:02:26.902 dma/ioat: not in enabled drivers build config 00:02:26.902 dma/skeleton: not in enabled drivers build config 00:02:26.902 net/af_packet: not in enabled drivers build config 00:02:26.902 net/af_xdp: not in enabled drivers build config 00:02:26.902 net/ark: not in enabled drivers build config 00:02:26.902 net/atlantic: not in enabled drivers build config 00:02:26.902 net/avp: not in enabled drivers build config 00:02:26.902 net/axgbe: not in enabled drivers build config 00:02:26.902 net/bnx2x: not in enabled drivers build config 00:02:26.902 net/bnxt: not in enabled drivers build config 00:02:26.902 net/bonding: not in enabled drivers build config 00:02:26.902 net/cnxk: not in enabled drivers build config 00:02:26.902 net/cpfl: not in enabled drivers build config 00:02:26.902 net/cxgbe: not in enabled drivers build config 00:02:26.902 net/dpaa: not in enabled drivers build config 00:02:26.902 net/dpaa2: not in enabled drivers build config 00:02:26.902 net/e1000: not in enabled drivers build config 00:02:26.902 net/ena: not in enabled drivers build config 00:02:26.902 net/enetc: not in enabled drivers build config 00:02:26.902 net/enetfec: not in enabled drivers build config 00:02:26.902 net/enic: not in enabled drivers build config 00:02:26.902 net/failsafe: not in enabled drivers build config 00:02:26.902 net/fm10k: not in enabled drivers build config 00:02:26.902 net/gve: not in enabled drivers build config 00:02:26.902 net/hinic: not in enabled drivers build config 00:02:26.902 net/hns3: not in enabled drivers build config 00:02:26.902 net/i40e: not in enabled drivers build config 00:02:26.902 net/iavf: not in enabled drivers build config 00:02:26.902 net/ice: not in enabled drivers build config 00:02:26.902 net/idpf: not in enabled drivers build config 00:02:26.902 net/igc: not in enabled drivers build config 00:02:26.902 net/ionic: not in enabled drivers build config 00:02:26.902 net/ipn3ke: not in enabled drivers build config 00:02:26.902 net/ixgbe: not in enabled drivers build config 00:02:26.902 net/mana: not in enabled drivers build config 00:02:26.902 net/memif: not in enabled drivers build config 00:02:26.902 net/mlx4: not in enabled drivers build config 00:02:26.902 net/mlx5: not in enabled drivers build config 00:02:26.902 net/mvneta: not in enabled drivers build config 00:02:26.902 net/mvpp2: not in enabled drivers build config 00:02:26.902 net/netvsc: not in enabled drivers build config 00:02:26.902 net/nfb: not in enabled drivers build config 00:02:26.902 net/nfp: not in enabled drivers build config 00:02:26.902 net/ngbe: not in enabled drivers build config 00:02:26.902 net/null: not in enabled drivers build config 00:02:26.902 net/octeontx: not in enabled drivers 
build config 00:02:26.902 net/octeon_ep: not in enabled drivers build config 00:02:26.902 net/pcap: not in enabled drivers build config 00:02:26.902 net/pfe: not in enabled drivers build config 00:02:26.902 net/qede: not in enabled drivers build config 00:02:26.902 net/ring: not in enabled drivers build config 00:02:26.902 net/sfc: not in enabled drivers build config 00:02:26.902 net/softnic: not in enabled drivers build config 00:02:26.902 net/tap: not in enabled drivers build config 00:02:26.902 net/thunderx: not in enabled drivers build config 00:02:26.902 net/txgbe: not in enabled drivers build config 00:02:26.902 net/vdev_netvsc: not in enabled drivers build config 00:02:26.902 net/vhost: not in enabled drivers build config 00:02:26.902 net/virtio: not in enabled drivers build config 00:02:26.902 net/vmxnet3: not in enabled drivers build config 00:02:26.902 raw/*: missing internal dependency, "rawdev" 00:02:26.902 crypto/armv8: not in enabled drivers build config 00:02:26.902 crypto/bcmfs: not in enabled drivers build config 00:02:26.902 crypto/caam_jr: not in enabled drivers build config 00:02:26.902 crypto/ccp: not in enabled drivers build config 00:02:26.902 crypto/cnxk: not in enabled drivers build config 00:02:26.902 crypto/dpaa_sec: not in enabled drivers build config 00:02:26.902 crypto/dpaa2_sec: not in enabled drivers build config 00:02:26.902 crypto/ipsec_mb: not in enabled drivers build config 00:02:26.902 crypto/mlx5: not in enabled drivers build config 00:02:26.902 crypto/mvsam: not in enabled drivers build config 00:02:26.902 crypto/nitrox: not in enabled drivers build config 00:02:26.902 crypto/null: not in enabled drivers build config 00:02:26.902 crypto/octeontx: not in enabled drivers build config 00:02:26.902 crypto/openssl: not in enabled drivers build config 00:02:26.902 crypto/scheduler: not in enabled drivers build config 00:02:26.902 crypto/uadk: not in enabled drivers build config 00:02:26.902 crypto/virtio: not in enabled drivers build config 00:02:26.902 compress/isal: not in enabled drivers build config 00:02:26.902 compress/mlx5: not in enabled drivers build config 00:02:26.902 compress/nitrox: not in enabled drivers build config 00:02:26.902 compress/octeontx: not in enabled drivers build config 00:02:26.902 compress/zlib: not in enabled drivers build config 00:02:26.902 regex/*: missing internal dependency, "regexdev" 00:02:26.902 ml/*: missing internal dependency, "mldev" 00:02:26.902 vdpa/ifc: not in enabled drivers build config 00:02:26.902 vdpa/mlx5: not in enabled drivers build config 00:02:26.902 vdpa/nfp: not in enabled drivers build config 00:02:26.902 vdpa/sfc: not in enabled drivers build config 00:02:26.902 event/*: missing internal dependency, "eventdev" 00:02:26.902 baseband/*: missing internal dependency, "bbdev" 00:02:26.902 gpu/*: missing internal dependency, "gpudev" 00:02:26.902 00:02:26.902 00:02:26.902 Build targets in project: 85 00:02:26.902 00:02:26.902 DPDK 24.03.0 00:02:26.902 00:02:26.902 User defined options 00:02:26.902 buildtype : debug 00:02:26.902 default_library : shared 00:02:26.902 libdir : lib 00:02:26.902 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:26.902 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:26.902 c_link_args : 00:02:26.903 cpu_instruction_set: native 00:02:26.903 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:26.903 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:26.903 enable_docs : false 00:02:26.903 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:26.903 enable_kmods : false 00:02:26.903 max_lcores : 128 00:02:26.903 tests : false 00:02:26.903 00:02:26.903 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:26.903 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:26.903 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:26.903 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:26.903 [3/268] Linking static target lib/librte_kvargs.a 00:02:26.903 [4/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:26.903 [5/268] Linking static target lib/librte_log.a 00:02:26.903 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:27.161 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.161 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:27.161 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:27.161 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:27.419 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:27.419 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:27.419 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:27.419 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:27.419 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:27.677 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:27.677 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:27.677 [18/268] Linking static target lib/librte_telemetry.a 00:02:27.677 [19/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.677 [20/268] Linking target lib/librte_log.so.24.1 00:02:27.935 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:27.935 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:28.192 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:28.192 [24/268] Linking target lib/librte_kvargs.so.24.1 00:02:28.192 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:28.192 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:28.192 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:28.449 [28/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:28.449 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:28.449 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:28.449 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:28.449 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:28.449 [33/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.708 [34/268] Linking target lib/librte_telemetry.so.24.1 00:02:28.708 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:28.966 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:28.966 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:28.966 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:29.224 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:29.224 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:29.224 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:29.224 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:29.224 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:29.224 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:29.224 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:29.482 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:29.482 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:29.482 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:29.482 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:29.740 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:29.999 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:29.999 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:29.999 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:29.999 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:30.257 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:30.257 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:30.257 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:30.515 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:30.515 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:30.515 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:30.515 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:30.773 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:31.031 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:31.031 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:31.031 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:31.031 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:31.289 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:31.289 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:31.547 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:31.547 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 
00:02:31.547 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:31.547 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:31.805 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:31.805 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:31.805 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.064 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.064 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.064 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.064 [79/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:32.064 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.064 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.064 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.321 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.321 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:32.321 [85/268] Linking static target lib/librte_eal.a 00:02:32.580 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.580 [87/268] Linking static target lib/librte_ring.a 00:02:32.580 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:32.580 [89/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:32.871 [90/268] Linking static target lib/librte_rcu.a 00:02:32.871 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:32.871 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:32.871 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:32.871 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:33.129 [95/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.129 [96/268] Linking static target lib/librte_mempool.a 00:02:33.129 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.129 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:33.129 [99/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.129 [100/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:33.386 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.644 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.644 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:33.644 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.644 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.644 [106/268] Linking static target lib/librte_mbuf.a 00:02:33.644 [107/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:33.901 [108/268] Linking static target lib/librte_meter.a 00:02:33.901 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:34.158 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.158 [111/268] Linking static target lib/librte_net.a 00:02:34.158 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:34.158 [113/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:34.158 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.158 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:34.416 [116/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.673 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.673 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:34.673 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.930 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:34.930 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.188 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.445 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.445 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.702 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.702 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:35.702 [127/268] Linking static target lib/librte_pci.a 00:02:35.702 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:35.702 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.702 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.702 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:35.702 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:35.960 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:35.960 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:35.960 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:35.960 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:35.960 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.960 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.218 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:36.218 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:36.218 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:36.218 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:36.218 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:36.218 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:36.218 [145/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:36.475 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:36.475 [147/268] Linking static target lib/librte_ethdev.a 00:02:36.475 [148/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:36.475 [149/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:36.475 [150/268] Linking static target lib/librte_cmdline.a 00:02:36.734 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:36.992 [152/268] Compiling C object 
lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:36.992 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:37.250 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:37.250 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:37.250 [156/268] Linking static target lib/librte_timer.a 00:02:37.250 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:37.250 [158/268] Linking static target lib/librte_hash.a 00:02:37.250 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:37.508 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:37.508 [161/268] Linking static target lib/librte_compressdev.a 00:02:37.508 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:37.767 [163/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:37.767 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.767 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:37.767 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:38.024 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:38.282 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:38.282 [169/268] Linking static target lib/librte_dmadev.a 00:02:38.282 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.282 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:38.282 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.553 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:38.553 [174/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.553 [175/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:38.553 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:38.553 [177/268] Linking static target lib/librte_cryptodev.a 00:02:38.553 [178/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:38.847 [179/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:39.121 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:39.121 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:39.121 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:39.121 [183/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.121 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:39.389 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:39.389 [186/268] Linking static target lib/librte_power.a 00:02:39.389 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:39.389 [188/268] Linking static target lib/librte_reorder.a 00:02:39.646 [189/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:39.905 [190/268] Linking static target lib/librte_security.a 00:02:39.905 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:39.905 [192/268] Compiling C object 
lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:39.905 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:39.905 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.162 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:40.728 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.728 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.728 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:40.728 [199/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:40.728 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:40.985 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:40.985 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.243 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:41.243 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:41.501 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:41.501 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:41.759 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:41.759 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:41.759 [209/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:41.759 [210/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:41.759 [211/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:41.759 [212/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:42.016 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:42.016 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.016 [215/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:42.016 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:42.016 [217/268] Linking static target drivers/librte_bus_pci.a 00:02:42.017 [218/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:42.017 [219/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:42.017 [220/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.017 [221/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:42.017 [222/268] Linking static target drivers/librte_bus_vdev.a 00:02:42.278 [223/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:42.278 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.278 [225/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:42.278 [226/268] Linking static target drivers/librte_mempool_ring.a 00:02:42.278 [227/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.540 [228/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.105 [229/268] Compiling C object 
lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:43.105 [230/268] Linking static target lib/librte_vhost.a 00:02:43.671 [231/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.671 [232/268] Linking target lib/librte_eal.so.24.1 00:02:43.940 [233/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:43.940 [234/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:43.940 [235/268] Linking target lib/librte_meter.so.24.1 00:02:43.940 [236/268] Linking target lib/librte_ring.so.24.1 00:02:43.940 [237/268] Linking target lib/librte_timer.so.24.1 00:02:43.940 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:43.940 [239/268] Linking target lib/librte_pci.so.24.1 00:02:43.940 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:44.204 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:44.204 [242/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:44.204 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:44.204 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:44.204 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:44.204 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:44.204 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:44.204 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:44.204 [249/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:44.204 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:44.204 [251/268] Linking target lib/librte_mbuf.so.24.1 00:02:44.204 [252/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.461 [253/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.461 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:44.461 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:44.461 [256/268] Linking target lib/librte_net.so.24.1 00:02:44.461 [257/268] Linking target lib/librte_reorder.so.24.1 00:02:44.461 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:44.718 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:44.718 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:44.718 [261/268] Linking target lib/librte_hash.so.24.1 00:02:44.718 [262/268] Linking target lib/librte_security.so.24.1 00:02:44.718 [263/268] Linking target lib/librte_cmdline.so.24.1 00:02:44.718 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:44.718 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:44.718 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:44.975 [267/268] Linking target lib/librte_power.so.24.1 00:02:44.975 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:44.975 INFO: autodetecting backend as ninja 00:02:44.975 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:11.506 CC lib/ut_mock/mock.o 00:03:11.506 CC lib/log/log.o 00:03:11.506 CC lib/log/log_flags.o 00:03:11.506 CC lib/log/log_deprecated.o 00:03:11.506 CC lib/ut/ut.o 00:03:11.506 LIB 
libspdk_ut.a 00:03:11.506 LIB libspdk_log.a 00:03:11.506 LIB libspdk_ut_mock.a 00:03:11.506 SO libspdk_ut.so.2.0 00:03:11.506 SO libspdk_ut_mock.so.6.0 00:03:11.506 SO libspdk_log.so.7.1 00:03:11.506 SYMLINK libspdk_ut.so 00:03:11.506 SYMLINK libspdk_ut_mock.so 00:03:11.506 SYMLINK libspdk_log.so 00:03:11.506 CC lib/util/base64.o 00:03:11.506 CC lib/util/bit_array.o 00:03:11.506 CC lib/util/cpuset.o 00:03:11.506 CC lib/util/crc16.o 00:03:11.506 CXX lib/trace_parser/trace.o 00:03:11.506 CC lib/util/crc32.o 00:03:11.506 CC lib/util/crc32c.o 00:03:11.506 CC lib/ioat/ioat.o 00:03:11.506 CC lib/dma/dma.o 00:03:11.764 CC lib/vfio_user/host/vfio_user_pci.o 00:03:11.764 CC lib/util/crc32_ieee.o 00:03:11.764 CC lib/util/crc64.o 00:03:11.764 CC lib/vfio_user/host/vfio_user.o 00:03:11.764 CC lib/util/dif.o 00:03:11.764 CC lib/util/fd.o 00:03:11.764 CC lib/util/fd_group.o 00:03:11.764 LIB libspdk_dma.a 00:03:11.764 CC lib/util/file.o 00:03:12.023 SO libspdk_dma.so.5.0 00:03:12.023 CC lib/util/hexlify.o 00:03:12.023 LIB libspdk_ioat.a 00:03:12.023 SO libspdk_ioat.so.7.0 00:03:12.023 SYMLINK libspdk_dma.so 00:03:12.023 CC lib/util/iov.o 00:03:12.023 CC lib/util/math.o 00:03:12.023 CC lib/util/net.o 00:03:12.023 LIB libspdk_vfio_user.a 00:03:12.023 SYMLINK libspdk_ioat.so 00:03:12.023 CC lib/util/pipe.o 00:03:12.023 CC lib/util/strerror_tls.o 00:03:12.023 SO libspdk_vfio_user.so.5.0 00:03:12.023 CC lib/util/string.o 00:03:12.023 SYMLINK libspdk_vfio_user.so 00:03:12.023 CC lib/util/uuid.o 00:03:12.023 CC lib/util/xor.o 00:03:12.281 CC lib/util/zipf.o 00:03:12.281 CC lib/util/md5.o 00:03:12.281 LIB libspdk_util.a 00:03:12.540 SO libspdk_util.so.10.1 00:03:12.798 SYMLINK libspdk_util.so 00:03:12.798 LIB libspdk_trace_parser.a 00:03:12.798 SO libspdk_trace_parser.so.6.0 00:03:12.798 CC lib/idxd/idxd.o 00:03:12.798 CC lib/idxd/idxd_user.o 00:03:12.798 CC lib/idxd/idxd_kernel.o 00:03:12.798 CC lib/rdma_provider/common.o 00:03:12.798 CC lib/rdma_utils/rdma_utils.o 00:03:12.798 CC lib/vmd/vmd.o 00:03:12.798 CC lib/conf/conf.o 00:03:12.798 CC lib/json/json_parse.o 00:03:12.798 CC lib/env_dpdk/env.o 00:03:12.798 SYMLINK libspdk_trace_parser.so 00:03:12.798 CC lib/env_dpdk/memory.o 00:03:13.056 CC lib/vmd/led.o 00:03:13.056 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:13.056 LIB libspdk_conf.a 00:03:13.056 CC lib/json/json_util.o 00:03:13.056 CC lib/json/json_write.o 00:03:13.056 SO libspdk_conf.so.6.0 00:03:13.056 LIB libspdk_rdma_utils.a 00:03:13.314 SYMLINK libspdk_conf.so 00:03:13.314 CC lib/env_dpdk/pci.o 00:03:13.314 CC lib/env_dpdk/init.o 00:03:13.314 SO libspdk_rdma_utils.so.1.0 00:03:13.314 LIB libspdk_rdma_provider.a 00:03:13.314 SYMLINK libspdk_rdma_utils.so 00:03:13.314 CC lib/env_dpdk/threads.o 00:03:13.314 SO libspdk_rdma_provider.so.6.0 00:03:13.314 CC lib/env_dpdk/pci_ioat.o 00:03:13.314 SYMLINK libspdk_rdma_provider.so 00:03:13.314 CC lib/env_dpdk/pci_virtio.o 00:03:13.314 LIB libspdk_json.a 00:03:13.314 CC lib/env_dpdk/pci_vmd.o 00:03:13.572 SO libspdk_json.so.6.0 00:03:13.572 LIB libspdk_idxd.a 00:03:13.572 CC lib/env_dpdk/pci_idxd.o 00:03:13.572 CC lib/env_dpdk/pci_event.o 00:03:13.572 LIB libspdk_vmd.a 00:03:13.572 SO libspdk_idxd.so.12.1 00:03:13.572 SYMLINK libspdk_json.so 00:03:13.572 CC lib/env_dpdk/sigbus_handler.o 00:03:13.572 SO libspdk_vmd.so.6.0 00:03:13.572 SYMLINK libspdk_idxd.so 00:03:13.572 CC lib/env_dpdk/pci_dpdk.o 00:03:13.572 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:13.572 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:13.572 SYMLINK libspdk_vmd.so 00:03:13.831 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:13.831 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:13.831 CC lib/jsonrpc/jsonrpc_client.o 00:03:13.831 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:14.090 LIB libspdk_jsonrpc.a 00:03:14.090 SO libspdk_jsonrpc.so.6.0 00:03:14.090 SYMLINK libspdk_jsonrpc.so 00:03:14.348 LIB libspdk_env_dpdk.a 00:03:14.348 CC lib/rpc/rpc.o 00:03:14.348 SO libspdk_env_dpdk.so.15.1 00:03:14.607 SYMLINK libspdk_env_dpdk.so 00:03:14.607 LIB libspdk_rpc.a 00:03:14.607 SO libspdk_rpc.so.6.0 00:03:14.866 SYMLINK libspdk_rpc.so 00:03:14.866 CC lib/notify/notify.o 00:03:14.866 CC lib/notify/notify_rpc.o 00:03:15.133 CC lib/keyring/keyring.o 00:03:15.133 CC lib/keyring/keyring_rpc.o 00:03:15.133 CC lib/trace/trace.o 00:03:15.133 CC lib/trace/trace_rpc.o 00:03:15.133 CC lib/trace/trace_flags.o 00:03:15.133 LIB libspdk_notify.a 00:03:15.133 SO libspdk_notify.so.6.0 00:03:15.395 LIB libspdk_keyring.a 00:03:15.395 LIB libspdk_trace.a 00:03:15.395 SYMLINK libspdk_notify.so 00:03:15.395 SO libspdk_keyring.so.2.0 00:03:15.395 SO libspdk_trace.so.11.0 00:03:15.395 SYMLINK libspdk_keyring.so 00:03:15.395 SYMLINK libspdk_trace.so 00:03:15.653 CC lib/thread/iobuf.o 00:03:15.653 CC lib/thread/thread.o 00:03:15.653 CC lib/sock/sock.o 00:03:15.653 CC lib/sock/sock_rpc.o 00:03:16.220 LIB libspdk_sock.a 00:03:16.220 SO libspdk_sock.so.10.0 00:03:16.220 SYMLINK libspdk_sock.so 00:03:16.479 CC lib/nvme/nvme_ctrlr.o 00:03:16.479 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:16.479 CC lib/nvme/nvme_fabric.o 00:03:16.479 CC lib/nvme/nvme_ns_cmd.o 00:03:16.479 CC lib/nvme/nvme_pcie_common.o 00:03:16.479 CC lib/nvme/nvme_pcie.o 00:03:16.479 CC lib/nvme/nvme_ns.o 00:03:16.479 CC lib/nvme/nvme.o 00:03:16.479 CC lib/nvme/nvme_qpair.o 00:03:17.413 LIB libspdk_thread.a 00:03:17.413 SO libspdk_thread.so.11.0 00:03:17.413 SYMLINK libspdk_thread.so 00:03:17.413 CC lib/nvme/nvme_quirks.o 00:03:17.413 CC lib/nvme/nvme_transport.o 00:03:17.413 CC lib/nvme/nvme_discovery.o 00:03:17.413 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:17.413 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:17.413 CC lib/nvme/nvme_tcp.o 00:03:17.671 CC lib/nvme/nvme_opal.o 00:03:17.671 CC lib/nvme/nvme_io_msg.o 00:03:17.928 CC lib/nvme/nvme_poll_group.o 00:03:17.928 CC lib/nvme/nvme_zns.o 00:03:17.928 CC lib/nvme/nvme_stubs.o 00:03:18.186 CC lib/nvme/nvme_auth.o 00:03:18.186 CC lib/accel/accel.o 00:03:18.186 CC lib/nvme/nvme_cuse.o 00:03:18.186 CC lib/nvme/nvme_rdma.o 00:03:18.444 CC lib/blob/blobstore.o 00:03:18.444 CC lib/blob/request.o 00:03:18.702 CC lib/blob/zeroes.o 00:03:18.702 CC lib/blob/blob_bs_dev.o 00:03:18.960 CC lib/init/json_config.o 00:03:18.960 CC lib/virtio/virtio.o 00:03:18.960 CC lib/accel/accel_rpc.o 00:03:19.273 CC lib/accel/accel_sw.o 00:03:19.273 CC lib/fsdev/fsdev.o 00:03:19.273 CC lib/fsdev/fsdev_io.o 00:03:19.273 CC lib/init/subsystem.o 00:03:19.273 CC lib/fsdev/fsdev_rpc.o 00:03:19.273 CC lib/virtio/virtio_vhost_user.o 00:03:19.273 CC lib/virtio/virtio_vfio_user.o 00:03:19.273 CC lib/virtio/virtio_pci.o 00:03:19.273 CC lib/init/subsystem_rpc.o 00:03:19.532 CC lib/init/rpc.o 00:03:19.532 LIB libspdk_accel.a 00:03:19.532 SO libspdk_accel.so.16.0 00:03:19.532 SYMLINK libspdk_accel.so 00:03:19.532 LIB libspdk_init.a 00:03:19.532 SO libspdk_init.so.6.0 00:03:19.532 LIB libspdk_nvme.a 00:03:19.790 LIB libspdk_virtio.a 00:03:19.790 SYMLINK libspdk_init.so 00:03:19.790 SO libspdk_virtio.so.7.0 00:03:19.790 LIB libspdk_fsdev.a 00:03:19.790 CC lib/bdev/bdev_rpc.o 00:03:19.790 CC lib/bdev/bdev.o 00:03:19.790 CC lib/bdev/part.o 00:03:19.790 CC 
lib/bdev/scsi_nvme.o 00:03:19.790 CC lib/bdev/bdev_zone.o 00:03:19.790 SYMLINK libspdk_virtio.so 00:03:19.790 SO libspdk_fsdev.so.2.0 00:03:19.790 SO libspdk_nvme.so.15.0 00:03:19.790 SYMLINK libspdk_fsdev.so 00:03:20.049 CC lib/event/app.o 00:03:20.049 CC lib/event/reactor.o 00:03:20.049 CC lib/event/log_rpc.o 00:03:20.049 CC lib/event/app_rpc.o 00:03:20.049 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:20.049 CC lib/event/scheduler_static.o 00:03:20.049 SYMLINK libspdk_nvme.so 00:03:20.307 LIB libspdk_event.a 00:03:20.307 SO libspdk_event.so.14.0 00:03:20.566 SYMLINK libspdk_event.so 00:03:20.825 LIB libspdk_fuse_dispatcher.a 00:03:20.825 SO libspdk_fuse_dispatcher.so.1.0 00:03:20.825 SYMLINK libspdk_fuse_dispatcher.so 00:03:21.393 LIB libspdk_blob.a 00:03:21.393 SO libspdk_blob.so.11.0 00:03:21.651 SYMLINK libspdk_blob.so 00:03:21.910 CC lib/lvol/lvol.o 00:03:21.910 CC lib/blobfs/blobfs.o 00:03:21.910 CC lib/blobfs/tree.o 00:03:22.478 LIB libspdk_bdev.a 00:03:22.478 SO libspdk_bdev.so.17.0 00:03:22.737 SYMLINK libspdk_bdev.so 00:03:22.737 LIB libspdk_blobfs.a 00:03:22.737 SO libspdk_blobfs.so.10.0 00:03:22.737 LIB libspdk_lvol.a 00:03:22.737 SYMLINK libspdk_blobfs.so 00:03:22.737 SO libspdk_lvol.so.10.0 00:03:22.737 CC lib/nvmf/ctrlr.o 00:03:22.737 CC lib/nvmf/ctrlr_discovery.o 00:03:22.737 CC lib/nvmf/ctrlr_bdev.o 00:03:22.737 CC lib/nvmf/subsystem.o 00:03:22.737 CC lib/nvmf/nvmf.o 00:03:22.737 CC lib/scsi/dev.o 00:03:22.737 CC lib/nbd/nbd.o 00:03:22.737 CC lib/ublk/ublk.o 00:03:22.737 CC lib/ftl/ftl_core.o 00:03:22.995 SYMLINK libspdk_lvol.so 00:03:22.995 CC lib/ftl/ftl_init.o 00:03:22.995 CC lib/scsi/lun.o 00:03:23.253 CC lib/scsi/port.o 00:03:23.253 CC lib/nbd/nbd_rpc.o 00:03:23.253 CC lib/ftl/ftl_layout.o 00:03:23.253 CC lib/nvmf/nvmf_rpc.o 00:03:23.253 CC lib/ublk/ublk_rpc.o 00:03:23.511 CC lib/scsi/scsi.o 00:03:23.511 LIB libspdk_nbd.a 00:03:23.511 SO libspdk_nbd.so.7.0 00:03:23.511 CC lib/ftl/ftl_debug.o 00:03:23.511 SYMLINK libspdk_nbd.so 00:03:23.511 CC lib/scsi/scsi_bdev.o 00:03:23.511 CC lib/scsi/scsi_pr.o 00:03:23.511 CC lib/nvmf/transport.o 00:03:23.511 LIB libspdk_ublk.a 00:03:23.769 SO libspdk_ublk.so.3.0 00:03:23.769 CC lib/ftl/ftl_io.o 00:03:23.769 SYMLINK libspdk_ublk.so 00:03:23.769 CC lib/ftl/ftl_sb.o 00:03:23.769 CC lib/ftl/ftl_l2p.o 00:03:23.769 CC lib/scsi/scsi_rpc.o 00:03:24.027 CC lib/scsi/task.o 00:03:24.027 CC lib/nvmf/tcp.o 00:03:24.027 CC lib/nvmf/stubs.o 00:03:24.027 CC lib/ftl/ftl_l2p_flat.o 00:03:24.027 CC lib/ftl/ftl_nv_cache.o 00:03:24.027 CC lib/ftl/ftl_band.o 00:03:24.028 LIB libspdk_scsi.a 00:03:24.286 CC lib/nvmf/mdns_server.o 00:03:24.286 CC lib/nvmf/rdma.o 00:03:24.286 CC lib/nvmf/auth.o 00:03:24.286 SO libspdk_scsi.so.9.0 00:03:24.286 CC lib/ftl/ftl_band_ops.o 00:03:24.286 SYMLINK libspdk_scsi.so 00:03:24.286 CC lib/ftl/ftl_writer.o 00:03:24.286 CC lib/ftl/ftl_rq.o 00:03:24.544 CC lib/ftl/ftl_reloc.o 00:03:24.544 CC lib/ftl/ftl_l2p_cache.o 00:03:24.544 CC lib/ftl/ftl_p2l.o 00:03:24.544 CC lib/iscsi/conn.o 00:03:24.544 CC lib/ftl/ftl_p2l_log.o 00:03:24.802 CC lib/vhost/vhost.o 00:03:25.060 CC lib/vhost/vhost_rpc.o 00:03:25.060 CC lib/iscsi/init_grp.o 00:03:25.060 CC lib/iscsi/iscsi.o 00:03:25.060 CC lib/iscsi/param.o 00:03:25.060 CC lib/iscsi/portal_grp.o 00:03:25.060 CC lib/ftl/mngt/ftl_mngt.o 00:03:25.319 CC lib/iscsi/tgt_node.o 00:03:25.319 CC lib/iscsi/iscsi_subsystem.o 00:03:25.319 CC lib/iscsi/iscsi_rpc.o 00:03:25.319 CC lib/iscsi/task.o 00:03:25.577 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:25.577 CC lib/vhost/vhost_scsi.o 00:03:25.577 CC 
lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:25.577 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:25.577 CC lib/vhost/vhost_blk.o 00:03:25.835 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:25.835 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:25.835 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:25.835 CC lib/vhost/rte_vhost_user.o 00:03:25.835 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:25.835 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:26.093 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:26.093 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:26.093 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:26.093 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:26.351 CC lib/ftl/utils/ftl_conf.o 00:03:26.351 CC lib/ftl/utils/ftl_md.o 00:03:26.351 CC lib/ftl/utils/ftl_mempool.o 00:03:26.351 LIB libspdk_nvmf.a 00:03:26.351 CC lib/ftl/utils/ftl_bitmap.o 00:03:26.351 LIB libspdk_iscsi.a 00:03:26.351 SO libspdk_nvmf.so.20.0 00:03:26.610 CC lib/ftl/utils/ftl_property.o 00:03:26.610 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:26.610 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:26.610 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:26.610 SO libspdk_iscsi.so.8.0 00:03:26.610 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:26.610 SYMLINK libspdk_nvmf.so 00:03:26.610 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:26.610 SYMLINK libspdk_iscsi.so 00:03:26.610 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:26.610 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:26.610 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:26.868 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:26.868 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:26.868 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:26.868 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:26.868 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:26.868 CC lib/ftl/base/ftl_base_dev.o 00:03:26.868 CC lib/ftl/base/ftl_base_bdev.o 00:03:26.868 CC lib/ftl/ftl_trace.o 00:03:27.126 LIB libspdk_vhost.a 00:03:27.126 SO libspdk_vhost.so.8.0 00:03:27.126 LIB libspdk_ftl.a 00:03:27.384 SYMLINK libspdk_vhost.so 00:03:27.384 SO libspdk_ftl.so.9.0 00:03:27.642 SYMLINK libspdk_ftl.so 00:03:28.208 CC module/env_dpdk/env_dpdk_rpc.o 00:03:28.208 CC module/accel/dsa/accel_dsa.o 00:03:28.208 CC module/accel/iaa/accel_iaa.o 00:03:28.208 CC module/blob/bdev/blob_bdev.o 00:03:28.208 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:28.208 CC module/keyring/file/keyring.o 00:03:28.208 CC module/accel/ioat/accel_ioat.o 00:03:28.208 CC module/accel/error/accel_error.o 00:03:28.208 CC module/fsdev/aio/fsdev_aio.o 00:03:28.208 CC module/sock/posix/posix.o 00:03:28.208 LIB libspdk_env_dpdk_rpc.a 00:03:28.208 SO libspdk_env_dpdk_rpc.so.6.0 00:03:28.208 SYMLINK libspdk_env_dpdk_rpc.so 00:03:28.208 CC module/accel/error/accel_error_rpc.o 00:03:28.208 CC module/keyring/file/keyring_rpc.o 00:03:28.466 CC module/accel/ioat/accel_ioat_rpc.o 00:03:28.466 CC module/accel/iaa/accel_iaa_rpc.o 00:03:28.466 LIB libspdk_scheduler_dynamic.a 00:03:28.466 SO libspdk_scheduler_dynamic.so.4.0 00:03:28.466 LIB libspdk_blob_bdev.a 00:03:28.466 CC module/accel/dsa/accel_dsa_rpc.o 00:03:28.467 SO libspdk_blob_bdev.so.11.0 00:03:28.467 LIB libspdk_accel_error.a 00:03:28.467 SYMLINK libspdk_scheduler_dynamic.so 00:03:28.467 LIB libspdk_keyring_file.a 00:03:28.467 LIB libspdk_accel_ioat.a 00:03:28.467 SO libspdk_accel_error.so.2.0 00:03:28.467 SO libspdk_keyring_file.so.2.0 00:03:28.467 LIB libspdk_accel_iaa.a 00:03:28.467 SYMLINK libspdk_blob_bdev.so 00:03:28.467 CC module/keyring/linux/keyring.o 00:03:28.467 SO libspdk_accel_ioat.so.6.0 00:03:28.467 SO libspdk_accel_iaa.so.3.0 00:03:28.467 SYMLINK libspdk_accel_error.so 00:03:28.725 CC 
module/keyring/linux/keyring_rpc.o 00:03:28.725 SYMLINK libspdk_keyring_file.so 00:03:28.725 SYMLINK libspdk_accel_ioat.so 00:03:28.725 SYMLINK libspdk_accel_iaa.so 00:03:28.725 LIB libspdk_accel_dsa.a 00:03:28.725 SO libspdk_accel_dsa.so.5.0 00:03:28.725 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:28.725 SYMLINK libspdk_accel_dsa.so 00:03:28.725 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:28.725 LIB libspdk_keyring_linux.a 00:03:28.725 SO libspdk_keyring_linux.so.1.0 00:03:28.725 CC module/scheduler/gscheduler/gscheduler.o 00:03:28.725 CC module/sock/uring/uring.o 00:03:28.983 SYMLINK libspdk_keyring_linux.so 00:03:28.983 CC module/fsdev/aio/linux_aio_mgr.o 00:03:28.983 LIB libspdk_scheduler_dpdk_governor.a 00:03:28.983 CC module/blobfs/bdev/blobfs_bdev.o 00:03:28.983 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:28.983 CC module/bdev/delay/vbdev_delay.o 00:03:28.983 CC module/bdev/error/vbdev_error.o 00:03:28.983 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:28.983 LIB libspdk_sock_posix.a 00:03:28.983 LIB libspdk_scheduler_gscheduler.a 00:03:28.983 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:28.983 CC module/bdev/error/vbdev_error_rpc.o 00:03:28.983 SO libspdk_sock_posix.so.6.0 00:03:28.983 SO libspdk_scheduler_gscheduler.so.4.0 00:03:28.983 CC module/bdev/gpt/gpt.o 00:03:28.983 SYMLINK libspdk_scheduler_gscheduler.so 00:03:28.983 LIB libspdk_fsdev_aio.a 00:03:28.983 CC module/bdev/gpt/vbdev_gpt.o 00:03:28.983 SYMLINK libspdk_sock_posix.so 00:03:28.983 SO libspdk_fsdev_aio.so.1.0 00:03:29.242 LIB libspdk_blobfs_bdev.a 00:03:29.242 SO libspdk_blobfs_bdev.so.6.0 00:03:29.242 SYMLINK libspdk_fsdev_aio.so 00:03:29.242 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:29.242 SYMLINK libspdk_blobfs_bdev.so 00:03:29.242 LIB libspdk_bdev_error.a 00:03:29.242 SO libspdk_bdev_error.so.6.0 00:03:29.242 CC module/bdev/lvol/vbdev_lvol.o 00:03:29.242 CC module/bdev/malloc/bdev_malloc.o 00:03:29.242 SYMLINK libspdk_bdev_error.so 00:03:29.242 LIB libspdk_bdev_gpt.a 00:03:29.500 CC module/bdev/null/bdev_null.o 00:03:29.500 CC module/bdev/nvme/bdev_nvme.o 00:03:29.500 SO libspdk_bdev_gpt.so.6.0 00:03:29.500 LIB libspdk_bdev_delay.a 00:03:29.500 CC module/bdev/passthru/vbdev_passthru.o 00:03:29.500 SO libspdk_bdev_delay.so.6.0 00:03:29.500 SYMLINK libspdk_bdev_gpt.so 00:03:29.500 CC module/bdev/raid/bdev_raid.o 00:03:29.500 SYMLINK libspdk_bdev_delay.so 00:03:29.500 CC module/bdev/raid/bdev_raid_rpc.o 00:03:29.500 CC module/bdev/split/vbdev_split.o 00:03:29.500 LIB libspdk_sock_uring.a 00:03:29.500 SO libspdk_sock_uring.so.5.0 00:03:29.758 SYMLINK libspdk_sock_uring.so 00:03:29.758 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:29.758 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:29.758 CC module/bdev/null/bdev_null_rpc.o 00:03:29.758 CC module/bdev/split/vbdev_split_rpc.o 00:03:29.758 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:29.758 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:29.758 LIB libspdk_bdev_malloc.a 00:03:29.758 LIB libspdk_bdev_null.a 00:03:29.758 SO libspdk_bdev_malloc.so.6.0 00:03:30.016 LIB libspdk_bdev_split.a 00:03:30.016 SO libspdk_bdev_null.so.6.0 00:03:30.016 LIB libspdk_bdev_passthru.a 00:03:30.016 SO libspdk_bdev_split.so.6.0 00:03:30.016 SYMLINK libspdk_bdev_malloc.so 00:03:30.016 SO libspdk_bdev_passthru.so.6.0 00:03:30.016 CC module/bdev/uring/bdev_uring.o 00:03:30.016 SYMLINK libspdk_bdev_null.so 00:03:30.016 CC module/bdev/aio/bdev_aio.o 00:03:30.016 SYMLINK libspdk_bdev_split.so 00:03:30.016 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:30.016 CC 
module/bdev/aio/bdev_aio_rpc.o 00:03:30.016 SYMLINK libspdk_bdev_passthru.so 00:03:30.016 CC module/bdev/ftl/bdev_ftl.o 00:03:30.016 LIB libspdk_bdev_lvol.a 00:03:30.275 LIB libspdk_bdev_zone_block.a 00:03:30.275 SO libspdk_bdev_lvol.so.6.0 00:03:30.275 CC module/bdev/iscsi/bdev_iscsi.o 00:03:30.275 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:30.275 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:30.275 SO libspdk_bdev_zone_block.so.6.0 00:03:30.275 SYMLINK libspdk_bdev_lvol.so 00:03:30.275 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:30.275 SYMLINK libspdk_bdev_zone_block.so 00:03:30.275 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:30.275 CC module/bdev/raid/bdev_raid_sb.o 00:03:30.533 CC module/bdev/uring/bdev_uring_rpc.o 00:03:30.533 LIB libspdk_bdev_aio.a 00:03:30.533 CC module/bdev/nvme/nvme_rpc.o 00:03:30.533 SO libspdk_bdev_aio.so.6.0 00:03:30.533 LIB libspdk_bdev_ftl.a 00:03:30.533 SYMLINK libspdk_bdev_aio.so 00:03:30.533 CC module/bdev/nvme/bdev_mdns_client.o 00:03:30.533 CC module/bdev/raid/raid0.o 00:03:30.533 SO libspdk_bdev_ftl.so.6.0 00:03:30.533 LIB libspdk_bdev_iscsi.a 00:03:30.533 SO libspdk_bdev_iscsi.so.6.0 00:03:30.533 SYMLINK libspdk_bdev_ftl.so 00:03:30.533 LIB libspdk_bdev_uring.a 00:03:30.792 CC module/bdev/raid/raid1.o 00:03:30.792 SYMLINK libspdk_bdev_iscsi.so 00:03:30.792 CC module/bdev/nvme/vbdev_opal.o 00:03:30.792 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:30.792 SO libspdk_bdev_uring.so.6.0 00:03:30.792 CC module/bdev/raid/concat.o 00:03:30.792 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:30.792 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:30.792 SYMLINK libspdk_bdev_uring.so 00:03:30.792 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:31.050 LIB libspdk_bdev_raid.a 00:03:31.050 LIB libspdk_bdev_virtio.a 00:03:31.050 SO libspdk_bdev_raid.so.6.0 00:03:31.050 SO libspdk_bdev_virtio.so.6.0 00:03:31.050 SYMLINK libspdk_bdev_virtio.so 00:03:31.050 SYMLINK libspdk_bdev_raid.so 00:03:31.984 LIB libspdk_bdev_nvme.a 00:03:31.984 SO libspdk_bdev_nvme.so.7.1 00:03:32.245 SYMLINK libspdk_bdev_nvme.so 00:03:32.812 CC module/event/subsystems/keyring/keyring.o 00:03:32.812 CC module/event/subsystems/iobuf/iobuf.o 00:03:32.812 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:32.812 CC module/event/subsystems/vmd/vmd.o 00:03:32.812 CC module/event/subsystems/fsdev/fsdev.o 00:03:32.812 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:32.812 CC module/event/subsystems/sock/sock.o 00:03:32.812 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:32.812 CC module/event/subsystems/scheduler/scheduler.o 00:03:32.812 LIB libspdk_event_keyring.a 00:03:32.812 LIB libspdk_event_sock.a 00:03:32.812 SO libspdk_event_keyring.so.1.0 00:03:32.812 LIB libspdk_event_fsdev.a 00:03:32.812 LIB libspdk_event_vhost_blk.a 00:03:32.812 LIB libspdk_event_vmd.a 00:03:32.812 LIB libspdk_event_scheduler.a 00:03:32.812 SO libspdk_event_sock.so.5.0 00:03:32.812 LIB libspdk_event_iobuf.a 00:03:32.812 SO libspdk_event_fsdev.so.1.0 00:03:32.812 SO libspdk_event_vhost_blk.so.3.0 00:03:32.812 SO libspdk_event_vmd.so.6.0 00:03:32.812 SO libspdk_event_scheduler.so.4.0 00:03:32.812 SO libspdk_event_iobuf.so.3.0 00:03:32.812 SYMLINK libspdk_event_keyring.so 00:03:33.071 SYMLINK libspdk_event_sock.so 00:03:33.071 SYMLINK libspdk_event_fsdev.so 00:03:33.071 SYMLINK libspdk_event_vhost_blk.so 00:03:33.071 SYMLINK libspdk_event_scheduler.so 00:03:33.071 SYMLINK libspdk_event_vmd.so 00:03:33.071 SYMLINK libspdk_event_iobuf.so 00:03:33.330 CC module/event/subsystems/accel/accel.o 00:03:33.330 LIB libspdk_event_accel.a 
00:03:33.330 SO libspdk_event_accel.so.6.0 00:03:33.588 SYMLINK libspdk_event_accel.so 00:03:33.847 CC module/event/subsystems/bdev/bdev.o 00:03:33.847 LIB libspdk_event_bdev.a 00:03:34.105 SO libspdk_event_bdev.so.6.0 00:03:34.105 SYMLINK libspdk_event_bdev.so 00:03:34.363 CC module/event/subsystems/scsi/scsi.o 00:03:34.363 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:34.363 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:34.363 CC module/event/subsystems/ublk/ublk.o 00:03:34.363 CC module/event/subsystems/nbd/nbd.o 00:03:34.363 LIB libspdk_event_ublk.a 00:03:34.363 LIB libspdk_event_nbd.a 00:03:34.363 LIB libspdk_event_scsi.a 00:03:34.363 SO libspdk_event_nbd.so.6.0 00:03:34.363 SO libspdk_event_ublk.so.3.0 00:03:34.622 SO libspdk_event_scsi.so.6.0 00:03:34.622 SYMLINK libspdk_event_nbd.so 00:03:34.622 SYMLINK libspdk_event_ublk.so 00:03:34.622 SYMLINK libspdk_event_scsi.so 00:03:34.622 LIB libspdk_event_nvmf.a 00:03:34.622 SO libspdk_event_nvmf.so.6.0 00:03:34.622 SYMLINK libspdk_event_nvmf.so 00:03:34.880 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:34.880 CC module/event/subsystems/iscsi/iscsi.o 00:03:34.880 LIB libspdk_event_vhost_scsi.a 00:03:34.880 LIB libspdk_event_iscsi.a 00:03:35.139 SO libspdk_event_vhost_scsi.so.3.0 00:03:35.139 SO libspdk_event_iscsi.so.6.0 00:03:35.139 SYMLINK libspdk_event_vhost_scsi.so 00:03:35.139 SYMLINK libspdk_event_iscsi.so 00:03:35.139 SO libspdk.so.6.0 00:03:35.139 SYMLINK libspdk.so 00:03:35.397 CC app/trace_record/trace_record.o 00:03:35.397 CC app/spdk_nvme_perf/perf.o 00:03:35.397 CXX app/trace/trace.o 00:03:35.397 CC app/spdk_lspci/spdk_lspci.o 00:03:35.656 CC app/nvmf_tgt/nvmf_main.o 00:03:35.656 CC app/iscsi_tgt/iscsi_tgt.o 00:03:35.656 CC app/spdk_tgt/spdk_tgt.o 00:03:35.656 CC examples/util/zipf/zipf.o 00:03:35.656 CC test/thread/poller_perf/poller_perf.o 00:03:35.656 CC test/dma/test_dma/test_dma.o 00:03:35.656 LINK spdk_lspci 00:03:35.915 LINK spdk_trace_record 00:03:35.915 LINK zipf 00:03:35.915 LINK poller_perf 00:03:35.915 LINK nvmf_tgt 00:03:35.915 LINK spdk_tgt 00:03:35.915 LINK iscsi_tgt 00:03:35.915 LINK spdk_trace 00:03:35.915 CC app/spdk_nvme_identify/identify.o 00:03:36.174 CC examples/ioat/perf/perf.o 00:03:36.174 CC examples/vmd/lsvmd/lsvmd.o 00:03:36.174 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:36.174 CC examples/idxd/perf/perf.o 00:03:36.174 LINK test_dma 00:03:36.174 CC examples/vmd/led/led.o 00:03:36.174 CC examples/thread/thread/thread_ex.o 00:03:36.432 CC examples/sock/hello_world/hello_sock.o 00:03:36.432 LINK lsvmd 00:03:36.432 LINK interrupt_tgt 00:03:36.432 LINK ioat_perf 00:03:36.432 LINK led 00:03:36.432 LINK spdk_nvme_perf 00:03:36.691 LINK idxd_perf 00:03:36.691 LINK hello_sock 00:03:36.691 LINK thread 00:03:36.691 CC examples/ioat/verify/verify.o 00:03:36.691 TEST_HEADER include/spdk/accel.h 00:03:36.691 TEST_HEADER include/spdk/accel_module.h 00:03:36.691 TEST_HEADER include/spdk/assert.h 00:03:36.691 TEST_HEADER include/spdk/barrier.h 00:03:36.691 TEST_HEADER include/spdk/base64.h 00:03:36.691 TEST_HEADER include/spdk/bdev.h 00:03:36.691 TEST_HEADER include/spdk/bdev_module.h 00:03:36.691 TEST_HEADER include/spdk/bdev_zone.h 00:03:36.691 TEST_HEADER include/spdk/bit_array.h 00:03:36.691 TEST_HEADER include/spdk/bit_pool.h 00:03:36.691 TEST_HEADER include/spdk/blob_bdev.h 00:03:36.691 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:36.691 TEST_HEADER include/spdk/blobfs.h 00:03:36.691 TEST_HEADER include/spdk/blob.h 00:03:36.691 TEST_HEADER include/spdk/conf.h 00:03:36.691 TEST_HEADER 
include/spdk/config.h 00:03:36.691 TEST_HEADER include/spdk/cpuset.h 00:03:36.691 CC test/app/bdev_svc/bdev_svc.o 00:03:36.691 TEST_HEADER include/spdk/crc16.h 00:03:36.691 TEST_HEADER include/spdk/crc32.h 00:03:36.691 TEST_HEADER include/spdk/crc64.h 00:03:36.691 TEST_HEADER include/spdk/dif.h 00:03:36.691 TEST_HEADER include/spdk/dma.h 00:03:36.691 TEST_HEADER include/spdk/endian.h 00:03:36.691 TEST_HEADER include/spdk/env_dpdk.h 00:03:36.691 TEST_HEADER include/spdk/env.h 00:03:36.691 TEST_HEADER include/spdk/event.h 00:03:36.691 TEST_HEADER include/spdk/fd_group.h 00:03:36.691 TEST_HEADER include/spdk/fd.h 00:03:36.691 TEST_HEADER include/spdk/file.h 00:03:36.691 TEST_HEADER include/spdk/fsdev.h 00:03:36.691 TEST_HEADER include/spdk/fsdev_module.h 00:03:36.691 CC test/blobfs/mkfs/mkfs.o 00:03:36.691 TEST_HEADER include/spdk/ftl.h 00:03:36.691 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:36.691 TEST_HEADER include/spdk/gpt_spec.h 00:03:36.691 TEST_HEADER include/spdk/hexlify.h 00:03:36.691 TEST_HEADER include/spdk/histogram_data.h 00:03:36.691 TEST_HEADER include/spdk/idxd.h 00:03:36.691 TEST_HEADER include/spdk/idxd_spec.h 00:03:36.691 TEST_HEADER include/spdk/init.h 00:03:36.691 TEST_HEADER include/spdk/ioat.h 00:03:36.691 TEST_HEADER include/spdk/ioat_spec.h 00:03:36.691 TEST_HEADER include/spdk/iscsi_spec.h 00:03:36.691 TEST_HEADER include/spdk/json.h 00:03:36.691 TEST_HEADER include/spdk/jsonrpc.h 00:03:36.691 TEST_HEADER include/spdk/keyring.h 00:03:36.691 TEST_HEADER include/spdk/keyring_module.h 00:03:36.691 TEST_HEADER include/spdk/likely.h 00:03:36.691 TEST_HEADER include/spdk/log.h 00:03:36.691 TEST_HEADER include/spdk/lvol.h 00:03:36.691 TEST_HEADER include/spdk/md5.h 00:03:36.691 CC app/spdk_nvme_discover/discovery_aer.o 00:03:36.691 TEST_HEADER include/spdk/memory.h 00:03:36.691 TEST_HEADER include/spdk/mmio.h 00:03:36.691 TEST_HEADER include/spdk/nbd.h 00:03:36.691 TEST_HEADER include/spdk/net.h 00:03:36.692 TEST_HEADER include/spdk/notify.h 00:03:36.692 TEST_HEADER include/spdk/nvme.h 00:03:36.692 TEST_HEADER include/spdk/nvme_intel.h 00:03:36.692 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:36.692 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:36.692 TEST_HEADER include/spdk/nvme_spec.h 00:03:36.692 TEST_HEADER include/spdk/nvme_zns.h 00:03:36.692 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:36.692 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:36.692 TEST_HEADER include/spdk/nvmf.h 00:03:36.950 TEST_HEADER include/spdk/nvmf_spec.h 00:03:36.950 TEST_HEADER include/spdk/nvmf_transport.h 00:03:36.950 TEST_HEADER include/spdk/opal.h 00:03:36.950 TEST_HEADER include/spdk/opal_spec.h 00:03:36.950 CC app/spdk_top/spdk_top.o 00:03:36.950 TEST_HEADER include/spdk/pci_ids.h 00:03:36.950 TEST_HEADER include/spdk/pipe.h 00:03:36.950 TEST_HEADER include/spdk/queue.h 00:03:36.950 TEST_HEADER include/spdk/reduce.h 00:03:36.950 TEST_HEADER include/spdk/rpc.h 00:03:36.950 TEST_HEADER include/spdk/scheduler.h 00:03:36.950 TEST_HEADER include/spdk/scsi.h 00:03:36.950 TEST_HEADER include/spdk/scsi_spec.h 00:03:36.950 TEST_HEADER include/spdk/sock.h 00:03:36.950 TEST_HEADER include/spdk/stdinc.h 00:03:36.950 TEST_HEADER include/spdk/string.h 00:03:36.950 TEST_HEADER include/spdk/thread.h 00:03:36.950 TEST_HEADER include/spdk/trace.h 00:03:36.950 TEST_HEADER include/spdk/trace_parser.h 00:03:36.950 TEST_HEADER include/spdk/tree.h 00:03:36.950 TEST_HEADER include/spdk/ublk.h 00:03:36.950 TEST_HEADER include/spdk/util.h 00:03:36.950 TEST_HEADER include/spdk/uuid.h 00:03:36.950 LINK 
spdk_nvme_identify 00:03:36.950 TEST_HEADER include/spdk/version.h 00:03:36.950 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:36.950 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:36.950 TEST_HEADER include/spdk/vhost.h 00:03:36.950 CC test/env/mem_callbacks/mem_callbacks.o 00:03:36.950 TEST_HEADER include/spdk/vmd.h 00:03:36.950 TEST_HEADER include/spdk/xor.h 00:03:36.950 TEST_HEADER include/spdk/zipf.h 00:03:36.950 CXX test/cpp_headers/accel.o 00:03:36.950 CC test/event/event_perf/event_perf.o 00:03:36.950 LINK verify 00:03:36.950 LINK bdev_svc 00:03:36.950 LINK mkfs 00:03:36.950 CC examples/nvme/hello_world/hello_world.o 00:03:36.950 LINK spdk_nvme_discover 00:03:37.209 LINK event_perf 00:03:37.209 CXX test/cpp_headers/accel_module.o 00:03:37.209 CC examples/nvme/reconnect/reconnect.o 00:03:37.209 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:37.209 LINK hello_world 00:03:37.209 CXX test/cpp_headers/assert.o 00:03:37.209 CC examples/nvme/arbitration/arbitration.o 00:03:37.209 CC test/env/vtophys/vtophys.o 00:03:37.467 CC test/event/reactor/reactor.o 00:03:37.467 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:37.467 LINK vtophys 00:03:37.467 CXX test/cpp_headers/barrier.o 00:03:37.467 LINK reactor 00:03:37.467 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:37.467 LINK mem_callbacks 00:03:37.467 LINK reconnect 00:03:37.724 CXX test/cpp_headers/base64.o 00:03:37.724 LINK arbitration 00:03:37.724 LINK env_dpdk_post_init 00:03:37.724 LINK nvme_manage 00:03:37.724 LINK spdk_top 00:03:37.724 CC test/event/reactor_perf/reactor_perf.o 00:03:37.724 LINK nvme_fuzz 00:03:37.724 CC test/event/app_repeat/app_repeat.o 00:03:37.981 CC test/app/histogram_perf/histogram_perf.o 00:03:37.981 CXX test/cpp_headers/bdev.o 00:03:37.981 CC test/lvol/esnap/esnap.o 00:03:37.981 LINK reactor_perf 00:03:37.981 CC test/app/jsoncat/jsoncat.o 00:03:37.981 LINK app_repeat 00:03:37.981 CC test/env/memory/memory_ut.o 00:03:37.981 CC examples/nvme/hotplug/hotplug.o 00:03:37.981 LINK histogram_perf 00:03:37.981 CC app/vhost/vhost.o 00:03:37.981 CXX test/cpp_headers/bdev_module.o 00:03:38.239 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:38.239 LINK jsoncat 00:03:38.239 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:38.239 LINK vhost 00:03:38.239 CXX test/cpp_headers/bdev_zone.o 00:03:38.497 LINK hotplug 00:03:38.497 CC test/event/scheduler/scheduler.o 00:03:38.497 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:38.497 LINK cmb_copy 00:03:38.497 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:38.497 CXX test/cpp_headers/bit_array.o 00:03:38.755 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:38.755 LINK scheduler 00:03:38.755 CC app/spdk_dd/spdk_dd.o 00:03:38.755 CXX test/cpp_headers/bit_pool.o 00:03:38.755 CC examples/nvme/abort/abort.o 00:03:38.755 CC app/fio/nvme/fio_plugin.o 00:03:38.755 LINK hello_fsdev 00:03:39.014 CXX test/cpp_headers/blob_bdev.o 00:03:39.014 LINK vhost_fuzz 00:03:39.014 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:39.272 CXX test/cpp_headers/blobfs_bdev.o 00:03:39.272 CC test/app/stub/stub.o 00:03:39.272 LINK abort 00:03:39.272 LINK spdk_dd 00:03:39.272 CXX test/cpp_headers/blobfs.o 00:03:39.272 LINK memory_ut 00:03:39.272 LINK pmr_persistence 00:03:39.530 CXX test/cpp_headers/blob.o 00:03:39.530 LINK stub 00:03:39.530 LINK spdk_nvme 00:03:39.530 CXX test/cpp_headers/conf.o 00:03:39.530 CC test/env/pci/pci_ut.o 00:03:39.530 CXX test/cpp_headers/config.o 00:03:39.530 CXX test/cpp_headers/cpuset.o 00:03:39.530 CC app/fio/bdev/fio_plugin.o 00:03:39.530 CXX 
test/cpp_headers/crc16.o 00:03:39.788 CC examples/accel/perf/accel_perf.o 00:03:39.788 CC test/rpc_client/rpc_client_test.o 00:03:39.788 CC test/nvme/aer/aer.o 00:03:39.788 CXX test/cpp_headers/crc32.o 00:03:39.788 CC examples/blob/hello_world/hello_blob.o 00:03:39.788 CC examples/blob/cli/blobcli.o 00:03:39.788 LINK iscsi_fuzz 00:03:40.046 LINK pci_ut 00:03:40.046 LINK rpc_client_test 00:03:40.046 CXX test/cpp_headers/crc64.o 00:03:40.046 LINK aer 00:03:40.046 LINK hello_blob 00:03:40.046 LINK spdk_bdev 00:03:40.304 CXX test/cpp_headers/dif.o 00:03:40.304 CC test/nvme/reset/reset.o 00:03:40.304 LINK accel_perf 00:03:40.304 CC test/nvme/sgl/sgl.o 00:03:40.304 CC test/accel/dif/dif.o 00:03:40.304 CC test/nvme/e2edp/nvme_dp.o 00:03:40.304 CXX test/cpp_headers/dma.o 00:03:40.304 CC test/nvme/overhead/overhead.o 00:03:40.304 CXX test/cpp_headers/endian.o 00:03:40.304 LINK blobcli 00:03:40.563 CC test/nvme/err_injection/err_injection.o 00:03:40.563 LINK reset 00:03:40.563 CXX test/cpp_headers/env_dpdk.o 00:03:40.563 LINK sgl 00:03:40.563 LINK nvme_dp 00:03:40.563 CC test/nvme/startup/startup.o 00:03:40.821 LINK err_injection 00:03:40.821 LINK overhead 00:03:40.821 CXX test/cpp_headers/env.o 00:03:40.821 CC test/nvme/reserve/reserve.o 00:03:40.821 CXX test/cpp_headers/event.o 00:03:40.821 LINK startup 00:03:40.821 CC examples/bdev/hello_world/hello_bdev.o 00:03:40.821 CC test/nvme/simple_copy/simple_copy.o 00:03:41.079 CC test/nvme/connect_stress/connect_stress.o 00:03:41.079 LINK dif 00:03:41.079 CC test/nvme/boot_partition/boot_partition.o 00:03:41.079 LINK reserve 00:03:41.079 CXX test/cpp_headers/fd_group.o 00:03:41.079 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.079 CC test/nvme/compliance/nvme_compliance.o 00:03:41.079 LINK hello_bdev 00:03:41.079 LINK simple_copy 00:03:41.337 LINK connect_stress 00:03:41.337 LINK boot_partition 00:03:41.337 CXX test/cpp_headers/fd.o 00:03:41.337 CC test/nvme/fused_ordering/fused_ordering.o 00:03:41.337 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:41.337 CXX test/cpp_headers/file.o 00:03:41.337 CXX test/cpp_headers/fsdev.o 00:03:41.595 CC test/nvme/fdp/fdp.o 00:03:41.595 LINK nvme_compliance 00:03:41.595 CC test/nvme/cuse/cuse.o 00:03:41.595 LINK fused_ordering 00:03:41.595 CXX test/cpp_headers/fsdev_module.o 00:03:41.595 CXX test/cpp_headers/ftl.o 00:03:41.595 LINK doorbell_aers 00:03:41.595 CC test/bdev/bdevio/bdevio.o 00:03:41.595 CXX test/cpp_headers/fuse_dispatcher.o 00:03:41.853 CXX test/cpp_headers/gpt_spec.o 00:03:41.853 CXX test/cpp_headers/hexlify.o 00:03:41.853 CXX test/cpp_headers/histogram_data.o 00:03:41.853 CXX test/cpp_headers/idxd.o 00:03:41.853 LINK fdp 00:03:41.853 CXX test/cpp_headers/idxd_spec.o 00:03:41.853 CXX test/cpp_headers/init.o 00:03:42.111 CXX test/cpp_headers/ioat.o 00:03:42.111 CXX test/cpp_headers/ioat_spec.o 00:03:42.111 LINK bdevperf 00:03:42.111 CXX test/cpp_headers/iscsi_spec.o 00:03:42.111 CXX test/cpp_headers/json.o 00:03:42.111 CXX test/cpp_headers/jsonrpc.o 00:03:42.111 LINK bdevio 00:03:42.111 CXX test/cpp_headers/keyring.o 00:03:42.111 CXX test/cpp_headers/keyring_module.o 00:03:42.111 CXX test/cpp_headers/likely.o 00:03:42.111 CXX test/cpp_headers/log.o 00:03:42.111 CXX test/cpp_headers/lvol.o 00:03:42.369 CXX test/cpp_headers/md5.o 00:03:42.369 CXX test/cpp_headers/memory.o 00:03:42.369 CXX test/cpp_headers/mmio.o 00:03:42.369 CXX test/cpp_headers/nbd.o 00:03:42.369 CXX test/cpp_headers/net.o 00:03:42.369 CXX test/cpp_headers/notify.o 00:03:42.369 CXX test/cpp_headers/nvme.o 00:03:42.369 CXX 
test/cpp_headers/nvme_intel.o 00:03:42.651 CXX test/cpp_headers/nvme_ocssd.o 00:03:42.651 CC examples/nvmf/nvmf/nvmf.o 00:03:42.651 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:42.651 CXX test/cpp_headers/nvme_spec.o 00:03:42.651 CXX test/cpp_headers/nvme_zns.o 00:03:42.651 CXX test/cpp_headers/nvmf_cmd.o 00:03:42.651 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:42.651 CXX test/cpp_headers/nvmf.o 00:03:42.651 CXX test/cpp_headers/nvmf_spec.o 00:03:42.651 CXX test/cpp_headers/nvmf_transport.o 00:03:42.651 CXX test/cpp_headers/opal.o 00:03:42.910 CXX test/cpp_headers/opal_spec.o 00:03:42.910 CXX test/cpp_headers/pci_ids.o 00:03:42.910 CXX test/cpp_headers/pipe.o 00:03:42.910 CXX test/cpp_headers/queue.o 00:03:42.910 LINK nvmf 00:03:42.910 CXX test/cpp_headers/reduce.o 00:03:42.910 CXX test/cpp_headers/rpc.o 00:03:42.910 CXX test/cpp_headers/scheduler.o 00:03:42.910 CXX test/cpp_headers/scsi.o 00:03:42.910 CXX test/cpp_headers/scsi_spec.o 00:03:42.910 CXX test/cpp_headers/sock.o 00:03:42.910 CXX test/cpp_headers/stdinc.o 00:03:43.168 LINK cuse 00:03:43.168 CXX test/cpp_headers/string.o 00:03:43.168 CXX test/cpp_headers/thread.o 00:03:43.168 CXX test/cpp_headers/trace.o 00:03:43.168 CXX test/cpp_headers/trace_parser.o 00:03:43.168 CXX test/cpp_headers/tree.o 00:03:43.168 CXX test/cpp_headers/ublk.o 00:03:43.168 CXX test/cpp_headers/util.o 00:03:43.168 CXX test/cpp_headers/uuid.o 00:03:43.168 CXX test/cpp_headers/version.o 00:03:43.425 CXX test/cpp_headers/vfio_user_pci.o 00:03:43.425 CXX test/cpp_headers/vfio_user_spec.o 00:03:43.425 CXX test/cpp_headers/vhost.o 00:03:43.425 CXX test/cpp_headers/vmd.o 00:03:43.425 CXX test/cpp_headers/xor.o 00:03:43.425 CXX test/cpp_headers/zipf.o 00:03:43.683 LINK esnap 00:03:43.941 00:03:43.941 real 1m30.523s 00:03:43.941 user 8m31.227s 00:03:43.941 sys 1m32.609s 00:03:43.941 09:26:29 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:43.941 ************************************ 00:03:43.941 END TEST make 00:03:43.941 ************************************ 00:03:43.941 09:26:29 make -- common/autotest_common.sh@10 -- $ set +x 00:03:43.941 09:26:29 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:43.941 09:26:29 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:43.941 09:26:29 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:43.941 09:26:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.941 09:26:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:43.941 09:26:29 -- pm/common@44 -- $ pid=5298 00:03:43.941 09:26:29 -- pm/common@50 -- $ kill -TERM 5298 00:03:43.941 09:26:29 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:43.941 09:26:29 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:43.941 09:26:29 -- pm/common@44 -- $ pid=5300 00:03:43.941 09:26:29 -- pm/common@50 -- $ kill -TERM 5300 00:03:43.941 09:26:29 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:43.941 09:26:29 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:44.199 09:26:29 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:44.199 09:26:29 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:44.199 09:26:29 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:44.199 09:26:30 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:44.199 09:26:30 -- scripts/common.sh@373 -- # 
cmp_versions 1.15 '<' 2 00:03:44.199 09:26:30 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:44.199 09:26:30 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:44.199 09:26:30 -- scripts/common.sh@336 -- # IFS=.-: 00:03:44.199 09:26:30 -- scripts/common.sh@336 -- # read -ra ver1 00:03:44.199 09:26:30 -- scripts/common.sh@337 -- # IFS=.-: 00:03:44.199 09:26:30 -- scripts/common.sh@337 -- # read -ra ver2 00:03:44.199 09:26:30 -- scripts/common.sh@338 -- # local 'op=<' 00:03:44.199 09:26:30 -- scripts/common.sh@340 -- # ver1_l=2 00:03:44.199 09:26:30 -- scripts/common.sh@341 -- # ver2_l=1 00:03:44.199 09:26:30 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:44.199 09:26:30 -- scripts/common.sh@344 -- # case "$op" in 00:03:44.199 09:26:30 -- scripts/common.sh@345 -- # : 1 00:03:44.199 09:26:30 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:44.199 09:26:30 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:44.199 09:26:30 -- scripts/common.sh@365 -- # decimal 1 00:03:44.199 09:26:30 -- scripts/common.sh@353 -- # local d=1 00:03:44.199 09:26:30 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:44.199 09:26:30 -- scripts/common.sh@355 -- # echo 1 00:03:44.199 09:26:30 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:44.199 09:26:30 -- scripts/common.sh@366 -- # decimal 2 00:03:44.199 09:26:30 -- scripts/common.sh@353 -- # local d=2 00:03:44.199 09:26:30 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:44.199 09:26:30 -- scripts/common.sh@355 -- # echo 2 00:03:44.199 09:26:30 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:44.199 09:26:30 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:44.199 09:26:30 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:44.199 09:26:30 -- scripts/common.sh@368 -- # return 0 00:03:44.199 09:26:30 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:44.199 09:26:30 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.199 --rc genhtml_branch_coverage=1 00:03:44.199 --rc genhtml_function_coverage=1 00:03:44.199 --rc genhtml_legend=1 00:03:44.199 --rc geninfo_all_blocks=1 00:03:44.199 --rc geninfo_unexecuted_blocks=1 00:03:44.199 00:03:44.199 ' 00:03:44.199 09:26:30 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.199 --rc genhtml_branch_coverage=1 00:03:44.199 --rc genhtml_function_coverage=1 00:03:44.199 --rc genhtml_legend=1 00:03:44.199 --rc geninfo_all_blocks=1 00:03:44.199 --rc geninfo_unexecuted_blocks=1 00:03:44.199 00:03:44.199 ' 00:03:44.199 09:26:30 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.199 --rc genhtml_branch_coverage=1 00:03:44.199 --rc genhtml_function_coverage=1 00:03:44.199 --rc genhtml_legend=1 00:03:44.199 --rc geninfo_all_blocks=1 00:03:44.199 --rc geninfo_unexecuted_blocks=1 00:03:44.199 00:03:44.199 ' 00:03:44.199 09:26:30 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:44.199 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:44.199 --rc genhtml_branch_coverage=1 00:03:44.199 --rc genhtml_function_coverage=1 00:03:44.199 --rc genhtml_legend=1 00:03:44.199 --rc geninfo_all_blocks=1 00:03:44.199 --rc geninfo_unexecuted_blocks=1 00:03:44.199 00:03:44.199 ' 00:03:44.199 09:26:30 -- spdk/autotest.sh@25 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:44.199 09:26:30 -- nvmf/common.sh@7 -- # uname -s 00:03:44.199 09:26:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:44.199 09:26:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:44.199 09:26:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:44.199 09:26:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:44.199 09:26:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:44.199 09:26:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:44.199 09:26:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:44.199 09:26:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:44.199 09:26:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:44.199 09:26:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:44.199 09:26:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:03:44.199 09:26:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:03:44.199 09:26:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:44.199 09:26:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:44.199 09:26:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:03:44.199 09:26:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:44.199 09:26:30 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:44.199 09:26:30 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:44.199 09:26:30 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:44.199 09:26:30 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:44.199 09:26:30 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:44.199 09:26:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.200 09:26:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.200 09:26:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.200 09:26:30 -- paths/export.sh@5 -- # export PATH 00:03:44.200 09:26:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:44.200 09:26:30 -- nvmf/common.sh@51 -- # : 0 00:03:44.200 09:26:30 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:44.200 09:26:30 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:44.200 09:26:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:44.200 09:26:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:44.200 09:26:30 -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:03:44.200 09:26:30 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:44.200 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:44.200 09:26:30 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:44.200 09:26:30 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:44.200 09:26:30 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:44.200 09:26:30 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:44.200 09:26:30 -- spdk/autotest.sh@32 -- # uname -s 00:03:44.200 09:26:30 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:44.200 09:26:30 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:44.200 09:26:30 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:44.200 09:26:30 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:44.200 09:26:30 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:44.200 09:26:30 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:44.200 09:26:30 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:44.200 09:26:30 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:44.200 09:26:30 -- spdk/autotest.sh@48 -- # udevadm_pid=54403 00:03:44.200 09:26:30 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:44.200 09:26:30 -- pm/common@17 -- # local monitor 00:03:44.200 09:26:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.200 09:26:30 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:44.200 09:26:30 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:44.200 09:26:30 -- pm/common@25 -- # sleep 1 00:03:44.200 09:26:30 -- pm/common@21 -- # date +%s 00:03:44.200 09:26:30 -- pm/common@21 -- # date +%s 00:03:44.200 09:26:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730798790 00:03:44.200 09:26:30 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730798790 00:03:44.457 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730798790_collect-vmstat.pm.log 00:03:44.457 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730798790_collect-cpu-load.pm.log 00:03:45.457 09:26:31 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:45.457 09:26:31 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:45.457 09:26:31 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:45.457 09:26:31 -- common/autotest_common.sh@10 -- # set +x 00:03:45.457 09:26:31 -- spdk/autotest.sh@59 -- # create_test_list 00:03:45.457 09:26:31 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:45.457 09:26:31 -- common/autotest_common.sh@10 -- # set +x 00:03:45.457 09:26:31 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:45.457 09:26:31 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:45.457 09:26:31 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:45.457 09:26:31 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:45.457 09:26:31 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:45.457 09:26:31 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 
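The prologue traced above saves the kernel's core_pattern, points crash dumps at SPDK's collector script, and backgrounds the CPU-load and vmstat monitors whose pm.log redirects appear in the log. A minimal standalone sketch of that sequence, assuming the paths from this run and root privileges; the actual write into /proc is implied by the trace rather than shown verbatim:

#!/usr/bin/env bash
# Hedged reconstruction of the autotest prologue above (paths copied from this run).
output=/home/vagrant/spdk_repo/spdk/../output
old_core_pattern=$(< /proc/sys/kernel/core_pattern)   # saved so it can be restored on exit
mkdir -p "$output/coredumps"
# Route crashes to the collector with PID, signal and timestamp, as echoed in the log:
echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' \
    > /proc/sys/kernel/core_pattern
# Background resource monitors, one log file per collector (same -d/-l/-p flags as the trace):
ts=$(date +%s)
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d "$output/power" -l -p "monitor.autotest.sh.$ts" &
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat   -d "$output/power" -l -p "monitor.autotest.sh.$ts" &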
00:03:45.457 09:26:31 -- common/autotest_common.sh@1455 -- # uname 00:03:45.457 09:26:31 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:45.457 09:26:31 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:45.457 09:26:31 -- common/autotest_common.sh@1475 -- # uname 00:03:45.457 09:26:31 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:45.457 09:26:31 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:45.457 09:26:31 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:45.457 lcov: LCOV version 1.15 00:03:45.457 09:26:31 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:03.543 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:03.543 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:18.505 09:27:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:18.505 09:27:04 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:18.505 09:27:04 -- common/autotest_common.sh@10 -- # set +x 00:04:18.505 09:27:04 -- spdk/autotest.sh@78 -- # rm -f 00:04:18.505 09:27:04 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:18.764 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.022 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:19.022 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:19.022 09:27:04 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.022 09:27:04 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:19.022 09:27:04 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:19.022 09:27:04 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:19.022 09:27:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.022 09:27:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:19.022 09:27:04 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:19.022 09:27:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.022 09:27:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.022 09:27:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.022 09:27:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:19.022 09:27:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:19.022 09:27:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.022 09:27:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.022 09:27:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.022 09:27:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:19.022 09:27:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:19.022 09:27:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:19.022 09:27:04 -- common/autotest_common.sh@1651 -- 
# [[ none != none ]] 00:04:19.022 09:27:04 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.022 09:27:04 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:19.022 09:27:04 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:19.022 09:27:04 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:19.023 09:27:04 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.023 09:27:04 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.023 09:27:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.023 09:27:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.023 09:27:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:19.023 09:27:04 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.023 09:27:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.023 No valid GPT data, bailing 00:04:19.023 09:27:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.023 09:27:04 -- scripts/common.sh@394 -- # pt= 00:04:19.023 09:27:04 -- scripts/common.sh@395 -- # return 1 00:04:19.023 09:27:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.023 1+0 records in 00:04:19.023 1+0 records out 00:04:19.023 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434861 s, 241 MB/s 00:04:19.023 09:27:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.023 09:27:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.023 09:27:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:19.023 09:27:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:19.023 09:27:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:19.023 No valid GPT data, bailing 00:04:19.023 09:27:04 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:19.281 09:27:04 -- scripts/common.sh@394 -- # pt= 00:04:19.281 09:27:04 -- scripts/common.sh@395 -- # return 1 00:04:19.281 09:27:04 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:19.281 1+0 records in 00:04:19.281 1+0 records out 00:04:19.281 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00410511 s, 255 MB/s 00:04:19.281 09:27:04 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.282 09:27:04 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.282 09:27:04 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:19.282 09:27:04 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:19.282 09:27:04 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:19.282 No valid GPT data, bailing 00:04:19.282 09:27:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:19.282 09:27:05 -- scripts/common.sh@394 -- # pt= 00:04:19.282 09:27:05 -- scripts/common.sh@395 -- # return 1 00:04:19.282 09:27:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:19.282 1+0 records in 00:04:19.282 1+0 records out 00:04:19.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00433297 s, 242 MB/s 00:04:19.282 09:27:05 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.282 09:27:05 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.282 09:27:05 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:19.282 09:27:05 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:19.282 09:27:05 -- scripts/common.sh@390 -- # 
/home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:19.282 No valid GPT data, bailing 00:04:19.282 09:27:05 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:19.282 09:27:05 -- scripts/common.sh@394 -- # pt= 00:04:19.282 09:27:05 -- scripts/common.sh@395 -- # return 1 00:04:19.282 09:27:05 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:19.282 1+0 records in 00:04:19.282 1+0 records out 00:04:19.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00372657 s, 281 MB/s 00:04:19.282 09:27:05 -- spdk/autotest.sh@105 -- # sync 00:04:19.282 09:27:05 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:19.282 09:27:05 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:19.282 09:27:05 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:21.185 09:27:07 -- spdk/autotest.sh@111 -- # uname -s 00:04:21.185 09:27:07 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:21.185 09:27:07 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:21.185 09:27:07 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.120 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.120 Hugepages 00:04:22.120 node hugesize free / total 00:04:22.120 node0 1048576kB 0 / 0 00:04:22.120 node0 2048kB 0 / 0 00:04:22.120 00:04:22.120 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.120 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:22.120 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:22.120 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:22.120 09:27:07 -- spdk/autotest.sh@117 -- # uname -s 00:04:22.120 09:27:07 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:22.120 09:27:07 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:22.120 09:27:07 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:22.686 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:22.945 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.945 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:22.945 09:27:08 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:23.883 09:27:09 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:23.883 09:27:09 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:23.883 09:27:09 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:23.883 09:27:09 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:23.883 09:27:09 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:23.883 09:27:09 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:23.883 09:27:09 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:23.883 09:27:09 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:23.883 09:27:09 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:24.142 09:27:09 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:24.142 09:27:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:24.142 09:27:09 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:24.401 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 
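Each NVMe namespace above fails the GPT probe ("No valid GPT data, bailing"), so autotest scrubs its first MiB with dd before the tests start. Condensed to its blkid fallback, the per-device check-and-wipe loop traced above amounts to the sketch below; the device glob and dd sizes are taken from this run, and the real block_in_use also consults spdk-gpt.py first:

shopt -s extglob
for dev in /dev/nvme*n!(*p*); do
    # block_in_use fallback: an empty PTTYPE from blkid means no partition table
    pt=$(blkid -s PTTYPE -o value "$dev" || true)
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1   # the "No valid GPT data" case above
    fi
done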
00:04:24.401 Waiting for block devices as requested 00:04:24.401 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.660 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:24.660 09:27:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:24.660 09:27:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:24.660 09:27:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:24.660 09:27:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:24.660 09:27:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:24.660 09:27:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1541 -- # continue 00:04:24.660 09:27:10 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:24.660 09:27:10 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:24.660 09:27:10 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:24.660 09:27:10 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:24.660 09:27:10 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:24.660 09:27:10 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:24.660 09:27:10 
-- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:24.660 09:27:10 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:24.660 09:27:10 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:24.660 09:27:10 -- common/autotest_common.sh@1541 -- # continue 00:04:24.660 09:27:10 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:24.660 09:27:10 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:24.660 09:27:10 -- common/autotest_common.sh@10 -- # set +x 00:04:24.660 09:27:10 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:24.660 09:27:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:24.660 09:27:10 -- common/autotest_common.sh@10 -- # set +x 00:04:24.660 09:27:10 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:25.604 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:25.604 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.604 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:25.604 09:27:11 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:25.604 09:27:11 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:25.604 09:27:11 -- common/autotest_common.sh@10 -- # set +x 00:04:25.604 09:27:11 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:25.604 09:27:11 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:25.604 09:27:11 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:25.604 09:27:11 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:25.604 09:27:11 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:25.604 09:27:11 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:25.604 09:27:11 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:25.604 09:27:11 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:25.604 09:27:11 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:25.604 09:27:11 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:25.604 09:27:11 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:25.604 09:27:11 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:25.604 09:27:11 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:25.604 09:27:11 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:25.604 09:27:11 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:25.604 09:27:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:25.604 09:27:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:25.604 09:27:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:25.604 09:27:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.604 09:27:11 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:25.604 09:27:11 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:25.604 09:27:11 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:25.604 09:27:11 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:25.604 09:27:11 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:25.604 09:27:11 -- common/autotest_common.sh@1570 -- # return 0 
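The opal_revert_cleanup pass above reads each controller's PCI device ID and keeps only 0x0a54 parts (which appears to target Intel data-center NVMe controllers in SPDK's opal tests); both QEMU controllers in this run report 0x0010, so the list stays empty and the revert is skipped. A sketch of that filter, with the BDFs hard-coded from this run:

bdfs=()
for bdf in 0000:00:10.0 0000:00:11.0; do
    device=$(cat "/sys/bus/pci/devices/$bdf/device")
    [[ $device == 0x0a54 ]] && bdfs+=("$bdf")   # 0x0010 here, so nothing matches
done
(( ${#bdfs[@]} > 0 )) || echo 'no OPAL-capable controllers found, nothing to revert'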
00:04:25.604 09:27:11 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:25.604 09:27:11 -- common/autotest_common.sh@1578 -- # return 0 00:04:25.604 09:27:11 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:25.604 09:27:11 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:25.604 09:27:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.604 09:27:11 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:25.604 09:27:11 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:25.604 09:27:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:25.604 09:27:11 -- common/autotest_common.sh@10 -- # set +x 00:04:25.604 09:27:11 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:04:25.604 09:27:11 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:04:25.604 09:27:11 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:04:25.604 09:27:11 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.604 09:27:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.604 09:27:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.604 09:27:11 -- common/autotest_common.sh@10 -- # set +x 00:04:25.604 ************************************ 00:04:25.604 START TEST env 00:04:25.604 ************************************ 00:04:25.604 09:27:11 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:25.897 * Looking for test storage... 00:04:25.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:25.897 09:27:11 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:25.897 09:27:11 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:25.897 09:27:11 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:25.897 09:27:11 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:25.897 09:27:11 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.897 09:27:11 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.898 09:27:11 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.898 09:27:11 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.898 09:27:11 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.898 09:27:11 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.898 09:27:11 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.898 09:27:11 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.898 09:27:11 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.898 09:27:11 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.898 09:27:11 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.898 09:27:11 env -- scripts/common.sh@344 -- # case "$op" in 00:04:25.898 09:27:11 env -- scripts/common.sh@345 -- # : 1 00:04:25.898 09:27:11 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.898 09:27:11 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:25.898 09:27:11 env -- scripts/common.sh@365 -- # decimal 1 00:04:25.898 09:27:11 env -- scripts/common.sh@353 -- # local d=1 00:04:25.898 09:27:11 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.898 09:27:11 env -- scripts/common.sh@355 -- # echo 1 00:04:25.898 09:27:11 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.898 09:27:11 env -- scripts/common.sh@366 -- # decimal 2 00:04:25.898 09:27:11 env -- scripts/common.sh@353 -- # local d=2 00:04:25.898 09:27:11 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.898 09:27:11 env -- scripts/common.sh@355 -- # echo 2 00:04:25.898 09:27:11 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.898 09:27:11 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.898 09:27:11 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.898 09:27:11 env -- scripts/common.sh@368 -- # return 0 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:25.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.898 --rc genhtml_branch_coverage=1 00:04:25.898 --rc genhtml_function_coverage=1 00:04:25.898 --rc genhtml_legend=1 00:04:25.898 --rc geninfo_all_blocks=1 00:04:25.898 --rc geninfo_unexecuted_blocks=1 00:04:25.898 00:04:25.898 ' 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:25.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.898 --rc genhtml_branch_coverage=1 00:04:25.898 --rc genhtml_function_coverage=1 00:04:25.898 --rc genhtml_legend=1 00:04:25.898 --rc geninfo_all_blocks=1 00:04:25.898 --rc geninfo_unexecuted_blocks=1 00:04:25.898 00:04:25.898 ' 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:25.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.898 --rc genhtml_branch_coverage=1 00:04:25.898 --rc genhtml_function_coverage=1 00:04:25.898 --rc genhtml_legend=1 00:04:25.898 --rc geninfo_all_blocks=1 00:04:25.898 --rc geninfo_unexecuted_blocks=1 00:04:25.898 00:04:25.898 ' 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:25.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.898 --rc genhtml_branch_coverage=1 00:04:25.898 --rc genhtml_function_coverage=1 00:04:25.898 --rc genhtml_legend=1 00:04:25.898 --rc geninfo_all_blocks=1 00:04:25.898 --rc geninfo_unexecuted_blocks=1 00:04:25.898 00:04:25.898 ' 00:04:25.898 09:27:11 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:25.898 09:27:11 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:25.898 09:27:11 env -- common/autotest_common.sh@10 -- # set +x 00:04:25.898 ************************************ 00:04:25.898 START TEST env_memory 00:04:25.898 ************************************ 00:04:25.898 09:27:11 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:25.898 00:04:25.898 00:04:25.898 CUnit - A unit testing framework for C - Version 2.1-3 00:04:25.898 http://cunit.sourceforge.net/ 00:04:25.898 00:04:25.898 00:04:25.898 Suite: memory 00:04:25.898 Test: alloc and free memory map ...[2024-11-05 09:27:11.844680] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:26.156 passed 00:04:26.156 Test: mem map translation ...[2024-11-05 09:27:11.875688] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:26.156 [2024-11-05 09:27:11.875896] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:26.156 [2024-11-05 09:27:11.876147] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:26.156 [2024-11-05 09:27:11.876263] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:26.156 passed 00:04:26.156 Test: mem map registration ...[2024-11-05 09:27:11.940386] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:26.156 [2024-11-05 09:27:11.940572] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:26.156 passed 00:04:26.156 Test: mem map adjacent registrations ...passed 00:04:26.156 00:04:26.156 Run Summary: Type Total Ran Passed Failed Inactive 00:04:26.156 suites 1 1 n/a 0 0 00:04:26.156 tests 4 4 4 0 0 00:04:26.156 asserts 152 152 152 0 n/a 00:04:26.156 00:04:26.156 Elapsed time = 0.213 seconds 00:04:26.156 00:04:26.156 real 0m0.232s 00:04:26.156 user 0m0.211s 00:04:26.156 sys 0m0.015s 00:04:26.156 09:27:12 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:26.156 ************************************ 00:04:26.156 END TEST env_memory 00:04:26.156 ************************************ 00:04:26.156 09:27:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:26.156 09:27:12 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.156 09:27:12 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:26.156 09:27:12 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:26.156 09:27:12 env -- common/autotest_common.sh@10 -- # set +x 00:04:26.156 ************************************ 00:04:26.156 START TEST env_vtophys 00:04:26.156 ************************************ 00:04:26.156 09:27:12 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:26.156 EAL: lib.eal log level changed from notice to debug 00:04:26.156 EAL: Detected lcore 0 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 1 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 2 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 3 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 4 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 5 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 6 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 7 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 8 as core 0 on socket 0 00:04:26.156 EAL: Detected lcore 9 as core 0 on socket 0 00:04:26.156 EAL: Maximum logical cores by configuration: 128 00:04:26.157 EAL: Detected CPU lcores: 10 00:04:26.157 EAL: Detected NUMA nodes: 1 00:04:26.157 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:26.157 EAL: Detected shared linkage of DPDK 00:04:26.416 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:26.416 EAL: Selected IOVA mode 'PA' 00:04:26.416 EAL: Probing VFIO support... 00:04:26.416 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.416 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:26.416 EAL: Ask a virtual area of 0x2e000 bytes 00:04:26.416 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:26.416 EAL: Setting up physically contiguous memory... 00:04:26.416 EAL: Setting maximum number of open files to 524288 00:04:26.416 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:26.416 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:26.416 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.416 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:26.416 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.416 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.416 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:26.416 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:26.416 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.416 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:26.416 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.416 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.416 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:26.416 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:26.416 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.416 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:26.416 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.416 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.416 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:26.416 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:26.416 EAL: Ask a virtual area of 0x61000 bytes 00:04:26.416 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:26.416 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:26.416 EAL: Ask a virtual area of 0x400000000 bytes 00:04:26.416 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:26.416 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:26.416 EAL: Hugepages will be freed exactly as allocated. 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: TSC frequency is ~2200000 KHz 00:04:26.416 EAL: Main lcore 0 is ready (tid=7f57c9872a00;cpuset=[0]) 00:04:26.416 EAL: Trying to obtain current memory policy. 00:04:26.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.416 EAL: Restoring previous memory policy: 0 00:04:26.416 EAL: request: mp_malloc_sync 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: Heap on socket 0 was expanded by 2MB 00:04:26.416 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:26.416 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:26.416 EAL: Mem event callback 'spdk:(nil)' registered 00:04:26.416 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:26.416 00:04:26.416 00:04:26.416 CUnit - A unit testing framework for C - Version 2.1-3 00:04:26.416 http://cunit.sourceforge.net/ 00:04:26.416 00:04:26.416 00:04:26.416 Suite: components_suite 00:04:26.416 Test: vtophys_malloc_test ...passed 00:04:26.416 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:26.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.416 EAL: Restoring previous memory policy: 4 00:04:26.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.416 EAL: request: mp_malloc_sync 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: Heap on socket 0 was expanded by 4MB 00:04:26.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.416 EAL: request: mp_malloc_sync 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: Heap on socket 0 was shrunk by 4MB 00:04:26.416 EAL: Trying to obtain current memory policy. 00:04:26.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.416 EAL: Restoring previous memory policy: 4 00:04:26.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.416 EAL: request: mp_malloc_sync 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: Heap on socket 0 was expanded by 6MB 00:04:26.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.416 EAL: request: mp_malloc_sync 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: Heap on socket 0 was shrunk by 6MB 00:04:26.416 EAL: Trying to obtain current memory policy. 00:04:26.416 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.416 EAL: Restoring previous memory policy: 4 00:04:26.416 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.416 EAL: request: mp_malloc_sync 00:04:26.416 EAL: No shared files mode enabled, IPC is disabled 00:04:26.416 EAL: Heap on socket 0 was expanded by 10MB 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was shrunk by 10MB 00:04:26.417 EAL: Trying to obtain current memory policy. 00:04:26.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.417 EAL: Restoring previous memory policy: 4 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was expanded by 18MB 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was shrunk by 18MB 00:04:26.417 EAL: Trying to obtain current memory policy. 00:04:26.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.417 EAL: Restoring previous memory policy: 4 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was expanded by 34MB 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was shrunk by 34MB 00:04:26.417 EAL: Trying to obtain current memory policy. 
00:04:26.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.417 EAL: Restoring previous memory policy: 4 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was expanded by 66MB 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was shrunk by 66MB 00:04:26.417 EAL: Trying to obtain current memory policy. 00:04:26.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.417 EAL: Restoring previous memory policy: 4 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was expanded by 130MB 00:04:26.417 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.417 EAL: request: mp_malloc_sync 00:04:26.417 EAL: No shared files mode enabled, IPC is disabled 00:04:26.417 EAL: Heap on socket 0 was shrunk by 130MB 00:04:26.417 EAL: Trying to obtain current memory policy. 00:04:26.417 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.676 EAL: Restoring previous memory policy: 4 00:04:26.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.676 EAL: request: mp_malloc_sync 00:04:26.676 EAL: No shared files mode enabled, IPC is disabled 00:04:26.676 EAL: Heap on socket 0 was expanded by 258MB 00:04:26.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.676 EAL: request: mp_malloc_sync 00:04:26.676 EAL: No shared files mode enabled, IPC is disabled 00:04:26.676 EAL: Heap on socket 0 was shrunk by 258MB 00:04:26.676 EAL: Trying to obtain current memory policy. 00:04:26.676 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.676 EAL: Restoring previous memory policy: 4 00:04:26.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.676 EAL: request: mp_malloc_sync 00:04:26.676 EAL: No shared files mode enabled, IPC is disabled 00:04:26.676 EAL: Heap on socket 0 was expanded by 514MB 00:04:26.676 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.935 EAL: request: mp_malloc_sync 00:04:26.935 EAL: No shared files mode enabled, IPC is disabled 00:04:26.935 EAL: Heap on socket 0 was shrunk by 514MB 00:04:26.935 EAL: Trying to obtain current memory policy. 
00:04:26.935 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:26.935 EAL: Restoring previous memory policy: 4 00:04:26.935 EAL: Calling mem event callback 'spdk:(nil)' 00:04:26.935 EAL: request: mp_malloc_sync 00:04:26.935 EAL: No shared files mode enabled, IPC is disabled 00:04:26.935 EAL: Heap on socket 0 was expanded by 1026MB 00:04:27.193 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.194 passed 00:04:27.194 00:04:27.194 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.194 suites 1 1 n/a 0 0 00:04:27.194 tests 2 2 2 0 0 00:04:27.194 asserts 5505 5505 5505 0 n/a 00:04:27.194 00:04:27.194 Elapsed time = 0.726 seconds 00:04:27.194 EAL: request: mp_malloc_sync 00:04:27.194 EAL: No shared files mode enabled, IPC is disabled 00:04:27.194 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:27.194 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.194 EAL: request: mp_malloc_sync 00:04:27.194 EAL: No shared files mode enabled, IPC is disabled 00:04:27.194 EAL: Heap on socket 0 was shrunk by 2MB 00:04:27.194 EAL: No shared files mode enabled, IPC is disabled 00:04:27.194 EAL: No shared files mode enabled, IPC is disabled 00:04:27.194 EAL: No shared files mode enabled, IPC is disabled 00:04:27.194 00:04:27.194 real 0m0.939s 00:04:27.194 user 0m0.470s 00:04:27.194 sys 0m0.336s 00:04:27.194 09:27:13 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.194 ************************************ 00:04:27.194 END TEST env_vtophys 00:04:27.194 ************************************ 00:04:27.194 09:27:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:27.194 09:27:13 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:27.194 09:27:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.194 09:27:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.194 09:27:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.194 ************************************ 00:04:27.194 START TEST env_pci 00:04:27.194 ************************************ 00:04:27.194 09:27:13 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:27.194 00:04:27.194 00:04:27.194 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.194 http://cunit.sourceforge.net/ 00:04:27.194 00:04:27.194 00:04:27.194 Suite: pci 00:04:27.194 Test: pci_hook ...[2024-11-05 09:27:13.087753] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56647 has claimed it 00:04:27.194 passed 00:04:27.194 00:04:27.194 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.194 suites 1 1 n/a 0 0 00:04:27.194 tests 1 1 1 0 0 00:04:27.194 asserts 25 25 25 0 n/a 00:04:27.194 00:04:27.194 Elapsed time = 0.002 secondsEAL: Cannot find device (10000:00:01.0) 00:04:27.194 EAL: Failed to attach device on primary process 00:04:27.194 00:04:27.194 00:04:27.194 real 0m0.022s 00:04:27.194 user 0m0.012s 00:04:27.194 sys 0m0.010s 00:04:27.194 09:27:13 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.194 ************************************ 00:04:27.194 END TEST env_pci 00:04:27.194 ************************************ 00:04:27.194 09:27:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:27.194 09:27:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:27.194 09:27:13 env -- env/env.sh@15 -- # uname 00:04:27.194 09:27:13 env -- 
env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:27.194 09:27:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:27.194 09:27:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.194 09:27:13 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:27.194 09:27:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.194 09:27:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.194 ************************************ 00:04:27.194 START TEST env_dpdk_post_init 00:04:27.194 ************************************ 00:04:27.194 09:27:13 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:27.453 EAL: Detected CPU lcores: 10 00:04:27.453 EAL: Detected NUMA nodes: 1 00:04:27.453 EAL: Detected shared linkage of DPDK 00:04:27.453 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.453 EAL: Selected IOVA mode 'PA' 00:04:27.453 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.453 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:27.453 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:27.453 Starting DPDK initialization... 00:04:27.453 Starting SPDK post initialization... 00:04:27.453 SPDK NVMe probe 00:04:27.453 Attaching to 0000:00:10.0 00:04:27.453 Attaching to 0000:00:11.0 00:04:27.453 Attached to 0000:00:10.0 00:04:27.453 Attached to 0000:00:11.0 00:04:27.453 Cleaning up... 00:04:27.453 00:04:27.453 real 0m0.184s 00:04:27.453 user 0m0.047s 00:04:27.453 sys 0m0.038s 00:04:27.453 09:27:13 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.453 ************************************ 00:04:27.453 END TEST env_dpdk_post_init 00:04:27.453 ************************************ 00:04:27.453 09:27:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:27.453 09:27:13 env -- env/env.sh@26 -- # uname 00:04:27.453 09:27:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:27.453 09:27:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.453 09:27:13 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.453 09:27:13 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.453 09:27:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.453 ************************************ 00:04:27.453 START TEST env_mem_callbacks 00:04:27.453 ************************************ 00:04:27.453 09:27:13 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:27.453 EAL: Detected CPU lcores: 10 00:04:27.453 EAL: Detected NUMA nodes: 1 00:04:27.453 EAL: Detected shared linkage of DPDK 00:04:27.453 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:27.453 EAL: Selected IOVA mode 'PA' 00:04:27.712 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:27.712 00:04:27.712 00:04:27.712 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.712 http://cunit.sourceforge.net/ 00:04:27.712 00:04:27.712 00:04:27.712 Suite: memory 00:04:27.712 Test: test ... 
00:04:27.712 register 0x200000200000 2097152 00:04:27.712 malloc 3145728 00:04:27.712 register 0x200000400000 4194304 00:04:27.712 buf 0x200000500000 len 3145728 PASSED 00:04:27.712 malloc 64 00:04:27.712 buf 0x2000004fff40 len 64 PASSED 00:04:27.712 malloc 4194304 00:04:27.712 register 0x200000800000 6291456 00:04:27.712 buf 0x200000a00000 len 4194304 PASSED 00:04:27.712 free 0x200000500000 3145728 00:04:27.712 free 0x2000004fff40 64 00:04:27.712 unregister 0x200000400000 4194304 PASSED 00:04:27.712 free 0x200000a00000 4194304 00:04:27.712 unregister 0x200000800000 6291456 PASSED 00:04:27.712 malloc 8388608 00:04:27.712 register 0x200000400000 10485760 00:04:27.712 buf 0x200000600000 len 8388608 PASSED 00:04:27.712 free 0x200000600000 8388608 00:04:27.712 unregister 0x200000400000 10485760 PASSED 00:04:27.712 passed 00:04:27.712 00:04:27.712 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.712 suites 1 1 n/a 0 0 00:04:27.712 tests 1 1 1 0 0 00:04:27.712 asserts 15 15 15 0 n/a 00:04:27.712 00:04:27.712 Elapsed time = 0.007 seconds 00:04:27.712 00:04:27.712 real 0m0.139s 00:04:27.712 user 0m0.011s 00:04:27.712 sys 0m0.027s 00:04:27.712 09:27:13 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.712 09:27:13 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:27.712 ************************************ 00:04:27.712 END TEST env_mem_callbacks 00:04:27.712 ************************************ 00:04:27.712 ************************************ 00:04:27.712 END TEST env 00:04:27.712 ************************************ 00:04:27.712 00:04:27.712 real 0m2.010s 00:04:27.712 user 0m0.988s 00:04:27.712 sys 0m0.660s 00:04:27.712 09:27:13 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:27.712 09:27:13 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.712 09:27:13 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:27.712 09:27:13 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:27.712 09:27:13 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:27.712 09:27:13 -- common/autotest_common.sh@10 -- # set +x 00:04:27.712 ************************************ 00:04:27.712 START TEST rpc 00:04:27.712 ************************************ 00:04:27.712 09:27:13 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:27.972 * Looking for test storage... 
00:04:27.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.972 09:27:13 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.972 09:27:13 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.972 09:27:13 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.972 09:27:13 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.972 09:27:13 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.972 09:27:13 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:27.972 09:27:13 rpc -- scripts/common.sh@345 -- # : 1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.972 09:27:13 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.972 09:27:13 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@353 -- # local d=1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.972 09:27:13 rpc -- scripts/common.sh@355 -- # echo 1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.972 09:27:13 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@353 -- # local d=2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.972 09:27:13 rpc -- scripts/common.sh@355 -- # echo 2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.972 09:27:13 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.972 09:27:13 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.972 09:27:13 rpc -- scripts/common.sh@368 -- # return 0 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:27.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.972 --rc genhtml_branch_coverage=1 00:04:27.972 --rc genhtml_function_coverage=1 00:04:27.972 --rc genhtml_legend=1 00:04:27.972 --rc geninfo_all_blocks=1 00:04:27.972 --rc geninfo_unexecuted_blocks=1 00:04:27.972 00:04:27.972 ' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:27.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.972 --rc genhtml_branch_coverage=1 00:04:27.972 --rc genhtml_function_coverage=1 00:04:27.972 --rc genhtml_legend=1 00:04:27.972 --rc geninfo_all_blocks=1 00:04:27.972 --rc geninfo_unexecuted_blocks=1 00:04:27.972 00:04:27.972 ' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:27.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.972 --rc genhtml_branch_coverage=1 00:04:27.972 --rc genhtml_function_coverage=1 00:04:27.972 --rc 
genhtml_legend=1 00:04:27.972 --rc geninfo_all_blocks=1 00:04:27.972 --rc geninfo_unexecuted_blocks=1 00:04:27.972 00:04:27.972 ' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:27.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.972 --rc genhtml_branch_coverage=1 00:04:27.972 --rc genhtml_function_coverage=1 00:04:27.972 --rc genhtml_legend=1 00:04:27.972 --rc geninfo_all_blocks=1 00:04:27.972 --rc geninfo_unexecuted_blocks=1 00:04:27.972 00:04:27.972 ' 00:04:27.972 09:27:13 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56770 00:04:27.972 09:27:13 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:27.972 09:27:13 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:27.972 09:27:13 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56770 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@833 -- # '[' -z 56770 ']' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:27.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:27.972 09:27:13 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:27.972 [2024-11-05 09:27:13.852534] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:04:27.972 [2024-11-05 09:27:13.852642] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56770 ] 00:04:28.231 [2024-11-05 09:27:13.999949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.231 [2024-11-05 09:27:14.029149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:28.231 [2024-11-05 09:27:14.029216] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56770' to capture a snapshot of events at runtime. 00:04:28.231 [2024-11-05 09:27:14.029240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:28.231 [2024-11-05 09:27:14.029248] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:28.231 [2024-11-05 09:27:14.029254] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56770 for offline analysis/debug. 
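The app_setup_trace NOTICEs above give the capture recipe for the bdev tracepoint group enabled via '-e bdev'. Spelled out as commands (the pid and shm path are specific to this run and change on every launch):

    $ spdk_trace -s spdk_tgt -p 56770          # live snapshot of the running target
    $ cp /dev/shm/spdk_tgt_trace.pid56770 .    # or keep the shm file for offline analysis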
00:04:28.231 [2024-11-05 09:27:14.029611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.231 [2024-11-05 09:27:14.066104] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:28.231 09:27:14 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:28.231 09:27:14 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:28.231 09:27:14 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.231 09:27:14 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:28.231 09:27:14 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:28.231 09:27:14 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:28.231 09:27:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.231 09:27:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.231 09:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.490 ************************************ 00:04:28.490 START TEST rpc_integrity 00:04:28.490 ************************************ 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:28.490 { 00:04:28.490 "name": "Malloc0", 00:04:28.490 "aliases": [ 00:04:28.490 "d7935a38-9483-4b4e-9063-195f87f1172b" 00:04:28.490 ], 00:04:28.490 "product_name": "Malloc disk", 00:04:28.490 "block_size": 512, 00:04:28.490 "num_blocks": 16384, 00:04:28.490 "uuid": "d7935a38-9483-4b4e-9063-195f87f1172b", 00:04:28.490 "assigned_rate_limits": { 00:04:28.490 "rw_ios_per_sec": 0, 00:04:28.490 "rw_mbytes_per_sec": 0, 00:04:28.490 "r_mbytes_per_sec": 0, 00:04:28.490 "w_mbytes_per_sec": 0 00:04:28.490 }, 00:04:28.490 "claimed": false, 00:04:28.490 "zoned": false, 00:04:28.490 
"supported_io_types": { 00:04:28.490 "read": true, 00:04:28.490 "write": true, 00:04:28.490 "unmap": true, 00:04:28.490 "flush": true, 00:04:28.490 "reset": true, 00:04:28.490 "nvme_admin": false, 00:04:28.490 "nvme_io": false, 00:04:28.490 "nvme_io_md": false, 00:04:28.490 "write_zeroes": true, 00:04:28.490 "zcopy": true, 00:04:28.490 "get_zone_info": false, 00:04:28.490 "zone_management": false, 00:04:28.490 "zone_append": false, 00:04:28.490 "compare": false, 00:04:28.490 "compare_and_write": false, 00:04:28.490 "abort": true, 00:04:28.490 "seek_hole": false, 00:04:28.490 "seek_data": false, 00:04:28.490 "copy": true, 00:04:28.490 "nvme_iov_md": false 00:04:28.490 }, 00:04:28.490 "memory_domains": [ 00:04:28.490 { 00:04:28.490 "dma_device_id": "system", 00:04:28.490 "dma_device_type": 1 00:04:28.490 }, 00:04:28.490 { 00:04:28.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.490 "dma_device_type": 2 00:04:28.490 } 00:04:28.490 ], 00:04:28.490 "driver_specific": {} 00:04:28.490 } 00:04:28.490 ]' 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:28.490 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.490 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.490 [2024-11-05 09:27:14.349836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:28.490 [2024-11-05 09:27:14.349894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:28.490 [2024-11-05 09:27:14.349912] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1d2ff10 00:04:28.490 [2024-11-05 09:27:14.349921] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.490 [2024-11-05 09:27:14.351407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.491 [2024-11-05 09:27:14.351454] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:28.491 Passthru0 00:04:28.491 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.491 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:28.491 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.491 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.491 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.491 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:28.491 { 00:04:28.491 "name": "Malloc0", 00:04:28.491 "aliases": [ 00:04:28.491 "d7935a38-9483-4b4e-9063-195f87f1172b" 00:04:28.491 ], 00:04:28.491 "product_name": "Malloc disk", 00:04:28.491 "block_size": 512, 00:04:28.491 "num_blocks": 16384, 00:04:28.491 "uuid": "d7935a38-9483-4b4e-9063-195f87f1172b", 00:04:28.491 "assigned_rate_limits": { 00:04:28.491 "rw_ios_per_sec": 0, 00:04:28.491 "rw_mbytes_per_sec": 0, 00:04:28.491 "r_mbytes_per_sec": 0, 00:04:28.491 "w_mbytes_per_sec": 0 00:04:28.491 }, 00:04:28.491 "claimed": true, 00:04:28.491 "claim_type": "exclusive_write", 00:04:28.491 "zoned": false, 00:04:28.491 "supported_io_types": { 00:04:28.491 "read": true, 00:04:28.491 "write": true, 00:04:28.491 "unmap": true, 00:04:28.491 "flush": true, 00:04:28.491 "reset": true, 00:04:28.491 "nvme_admin": false, 
00:04:28.491 "nvme_io": false, 00:04:28.491 "nvme_io_md": false, 00:04:28.491 "write_zeroes": true, 00:04:28.491 "zcopy": true, 00:04:28.491 "get_zone_info": false, 00:04:28.491 "zone_management": false, 00:04:28.491 "zone_append": false, 00:04:28.491 "compare": false, 00:04:28.491 "compare_and_write": false, 00:04:28.491 "abort": true, 00:04:28.491 "seek_hole": false, 00:04:28.491 "seek_data": false, 00:04:28.491 "copy": true, 00:04:28.491 "nvme_iov_md": false 00:04:28.491 }, 00:04:28.491 "memory_domains": [ 00:04:28.491 { 00:04:28.491 "dma_device_id": "system", 00:04:28.491 "dma_device_type": 1 00:04:28.491 }, 00:04:28.491 { 00:04:28.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.491 "dma_device_type": 2 00:04:28.491 } 00:04:28.491 ], 00:04:28.491 "driver_specific": {} 00:04:28.491 }, 00:04:28.491 { 00:04:28.491 "name": "Passthru0", 00:04:28.491 "aliases": [ 00:04:28.491 "82d8e4d8-e48d-55e4-8a30-2f67c016eb18" 00:04:28.491 ], 00:04:28.491 "product_name": "passthru", 00:04:28.491 "block_size": 512, 00:04:28.491 "num_blocks": 16384, 00:04:28.491 "uuid": "82d8e4d8-e48d-55e4-8a30-2f67c016eb18", 00:04:28.491 "assigned_rate_limits": { 00:04:28.491 "rw_ios_per_sec": 0, 00:04:28.491 "rw_mbytes_per_sec": 0, 00:04:28.491 "r_mbytes_per_sec": 0, 00:04:28.491 "w_mbytes_per_sec": 0 00:04:28.491 }, 00:04:28.491 "claimed": false, 00:04:28.491 "zoned": false, 00:04:28.491 "supported_io_types": { 00:04:28.491 "read": true, 00:04:28.491 "write": true, 00:04:28.491 "unmap": true, 00:04:28.491 "flush": true, 00:04:28.491 "reset": true, 00:04:28.491 "nvme_admin": false, 00:04:28.491 "nvme_io": false, 00:04:28.491 "nvme_io_md": false, 00:04:28.491 "write_zeroes": true, 00:04:28.491 "zcopy": true, 00:04:28.491 "get_zone_info": false, 00:04:28.491 "zone_management": false, 00:04:28.491 "zone_append": false, 00:04:28.491 "compare": false, 00:04:28.491 "compare_and_write": false, 00:04:28.491 "abort": true, 00:04:28.491 "seek_hole": false, 00:04:28.491 "seek_data": false, 00:04:28.491 "copy": true, 00:04:28.491 "nvme_iov_md": false 00:04:28.491 }, 00:04:28.491 "memory_domains": [ 00:04:28.491 { 00:04:28.491 "dma_device_id": "system", 00:04:28.491 "dma_device_type": 1 00:04:28.491 }, 00:04:28.491 { 00:04:28.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.491 "dma_device_type": 2 00:04:28.491 } 00:04:28.491 ], 00:04:28.491 "driver_specific": { 00:04:28.491 "passthru": { 00:04:28.491 "name": "Passthru0", 00:04:28.491 "base_bdev_name": "Malloc0" 00:04:28.491 } 00:04:28.491 } 00:04:28.491 } 00:04:28.491 ]' 00:04:28.491 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:28.491 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:28.491 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:28.491 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.491 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:28.750 09:27:14 rpc.rpc_integrity -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:28.750 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:28.750 09:27:14 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:28.750 00:04:28.750 real 0m0.323s 00:04:28.750 user 0m0.230s 00:04:28.750 sys 0m0.035s 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 ************************************ 00:04:28.750 END TEST rpc_integrity 00:04:28.750 ************************************ 00:04:28.750 09:27:14 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:28.750 09:27:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:28.750 09:27:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:28.750 09:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 ************************************ 00:04:28.750 START TEST rpc_plugins 00:04:28.750 ************************************ 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:28.750 { 00:04:28.750 "name": "Malloc1", 00:04:28.750 "aliases": [ 00:04:28.750 "21c19688-a16f-4b72-8e41-b06e4d69559d" 00:04:28.750 ], 00:04:28.750 "product_name": "Malloc disk", 00:04:28.750 "block_size": 4096, 00:04:28.750 "num_blocks": 256, 00:04:28.750 "uuid": "21c19688-a16f-4b72-8e41-b06e4d69559d", 00:04:28.750 "assigned_rate_limits": { 00:04:28.750 "rw_ios_per_sec": 0, 00:04:28.750 "rw_mbytes_per_sec": 0, 00:04:28.750 "r_mbytes_per_sec": 0, 00:04:28.750 "w_mbytes_per_sec": 0 00:04:28.750 }, 00:04:28.750 "claimed": false, 00:04:28.750 "zoned": false, 00:04:28.750 "supported_io_types": { 00:04:28.750 "read": true, 00:04:28.750 "write": true, 00:04:28.750 "unmap": true, 00:04:28.750 "flush": true, 00:04:28.750 "reset": true, 00:04:28.750 "nvme_admin": false, 00:04:28.750 "nvme_io": false, 00:04:28.750 "nvme_io_md": false, 00:04:28.750 "write_zeroes": true, 00:04:28.750 "zcopy": true, 00:04:28.750 "get_zone_info": false, 00:04:28.750 "zone_management": false, 00:04:28.750 "zone_append": false, 00:04:28.750 "compare": false, 00:04:28.750 "compare_and_write": false, 00:04:28.750 "abort": true, 00:04:28.750 "seek_hole": false, 00:04:28.750 "seek_data": false, 00:04:28.750 "copy": true, 00:04:28.750 "nvme_iov_md": false 00:04:28.750 }, 00:04:28.750 "memory_domains": [ 00:04:28.750 { 
00:04:28.750 "dma_device_id": "system", 00:04:28.750 "dma_device_type": 1 00:04:28.750 }, 00:04:28.750 { 00:04:28.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:28.750 "dma_device_type": 2 00:04:28.750 } 00:04:28.750 ], 00:04:28.750 "driver_specific": {} 00:04:28.750 } 00:04:28.750 ]' 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:28.750 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:28.750 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:29.009 09:27:14 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:29.009 00:04:29.009 real 0m0.155s 00:04:29.009 user 0m0.098s 00:04:29.009 sys 0m0.020s 00:04:29.009 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.009 ************************************ 00:04:29.009 END TEST rpc_plugins 00:04:29.009 09:27:14 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:29.009 ************************************ 00:04:29.009 09:27:14 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:29.009 09:27:14 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.009 09:27:14 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.009 09:27:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.009 ************************************ 00:04:29.009 START TEST rpc_trace_cmd_test 00:04:29.009 ************************************ 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.009 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:29.009 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56770", 00:04:29.009 "tpoint_group_mask": "0x8", 00:04:29.009 "iscsi_conn": { 00:04:29.009 "mask": "0x2", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "scsi": { 00:04:29.009 "mask": "0x4", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "bdev": { 00:04:29.009 "mask": "0x8", 00:04:29.009 "tpoint_mask": "0xffffffffffffffff" 00:04:29.009 }, 00:04:29.009 "nvmf_rdma": { 00:04:29.009 "mask": "0x10", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "nvmf_tcp": { 00:04:29.009 "mask": "0x20", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "ftl": { 00:04:29.009 
"mask": "0x40", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "blobfs": { 00:04:29.009 "mask": "0x80", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "dsa": { 00:04:29.009 "mask": "0x200", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "thread": { 00:04:29.009 "mask": "0x400", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "nvme_pcie": { 00:04:29.009 "mask": "0x800", 00:04:29.009 "tpoint_mask": "0x0" 00:04:29.009 }, 00:04:29.009 "iaa": { 00:04:29.009 "mask": "0x1000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 }, 00:04:29.010 "nvme_tcp": { 00:04:29.010 "mask": "0x2000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 }, 00:04:29.010 "bdev_nvme": { 00:04:29.010 "mask": "0x4000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 }, 00:04:29.010 "sock": { 00:04:29.010 "mask": "0x8000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 }, 00:04:29.010 "blob": { 00:04:29.010 "mask": "0x10000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 }, 00:04:29.010 "bdev_raid": { 00:04:29.010 "mask": "0x20000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 }, 00:04:29.010 "scheduler": { 00:04:29.010 "mask": "0x40000", 00:04:29.010 "tpoint_mask": "0x0" 00:04:29.010 } 00:04:29.010 }' 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:29.010 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:29.268 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:29.268 09:27:14 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:29.269 09:27:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:29.269 00:04:29.269 real 0m0.278s 00:04:29.269 user 0m0.242s 00:04:29.269 sys 0m0.028s 00:04:29.269 09:27:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.269 ************************************ 00:04:29.269 END TEST rpc_trace_cmd_test 00:04:29.269 09:27:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:29.269 ************************************ 00:04:29.269 09:27:15 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:29.269 09:27:15 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:29.269 09:27:15 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:29.269 09:27:15 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.269 09:27:15 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.269 09:27:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.269 ************************************ 00:04:29.269 START TEST rpc_daemon_integrity 00:04:29.269 ************************************ 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.269 
09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:29.269 { 00:04:29.269 "name": "Malloc2", 00:04:29.269 "aliases": [ 00:04:29.269 "d493788a-2713-49b2-a0bb-ceb3896ed80a" 00:04:29.269 ], 00:04:29.269 "product_name": "Malloc disk", 00:04:29.269 "block_size": 512, 00:04:29.269 "num_blocks": 16384, 00:04:29.269 "uuid": "d493788a-2713-49b2-a0bb-ceb3896ed80a", 00:04:29.269 "assigned_rate_limits": { 00:04:29.269 "rw_ios_per_sec": 0, 00:04:29.269 "rw_mbytes_per_sec": 0, 00:04:29.269 "r_mbytes_per_sec": 0, 00:04:29.269 "w_mbytes_per_sec": 0 00:04:29.269 }, 00:04:29.269 "claimed": false, 00:04:29.269 "zoned": false, 00:04:29.269 "supported_io_types": { 00:04:29.269 "read": true, 00:04:29.269 "write": true, 00:04:29.269 "unmap": true, 00:04:29.269 "flush": true, 00:04:29.269 "reset": true, 00:04:29.269 "nvme_admin": false, 00:04:29.269 "nvme_io": false, 00:04:29.269 "nvme_io_md": false, 00:04:29.269 "write_zeroes": true, 00:04:29.269 "zcopy": true, 00:04:29.269 "get_zone_info": false, 00:04:29.269 "zone_management": false, 00:04:29.269 "zone_append": false, 00:04:29.269 "compare": false, 00:04:29.269 "compare_and_write": false, 00:04:29.269 "abort": true, 00:04:29.269 "seek_hole": false, 00:04:29.269 "seek_data": false, 00:04:29.269 "copy": true, 00:04:29.269 "nvme_iov_md": false 00:04:29.269 }, 00:04:29.269 "memory_domains": [ 00:04:29.269 { 00:04:29.269 "dma_device_id": "system", 00:04:29.269 "dma_device_type": 1 00:04:29.269 }, 00:04:29.269 { 00:04:29.269 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.269 "dma_device_type": 2 00:04:29.269 } 00:04:29.269 ], 00:04:29.269 "driver_specific": {} 00:04:29.269 } 00:04:29.269 ]' 00:04:29.269 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.528 [2024-11-05 09:27:15.250255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:29.528 [2024-11-05 09:27:15.250327] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:04:29.528 [2024-11-05 09:27:15.250343] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1eca980 00:04:29.528 [2024-11-05 09:27:15.250351] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:29.528 [2024-11-05 09:27:15.251707] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:29.528 [2024-11-05 09:27:15.251754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:29.528 Passthru0 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:29.528 { 00:04:29.528 "name": "Malloc2", 00:04:29.528 "aliases": [ 00:04:29.528 "d493788a-2713-49b2-a0bb-ceb3896ed80a" 00:04:29.528 ], 00:04:29.528 "product_name": "Malloc disk", 00:04:29.528 "block_size": 512, 00:04:29.528 "num_blocks": 16384, 00:04:29.528 "uuid": "d493788a-2713-49b2-a0bb-ceb3896ed80a", 00:04:29.528 "assigned_rate_limits": { 00:04:29.528 "rw_ios_per_sec": 0, 00:04:29.528 "rw_mbytes_per_sec": 0, 00:04:29.528 "r_mbytes_per_sec": 0, 00:04:29.528 "w_mbytes_per_sec": 0 00:04:29.528 }, 00:04:29.528 "claimed": true, 00:04:29.528 "claim_type": "exclusive_write", 00:04:29.528 "zoned": false, 00:04:29.528 "supported_io_types": { 00:04:29.528 "read": true, 00:04:29.528 "write": true, 00:04:29.528 "unmap": true, 00:04:29.528 "flush": true, 00:04:29.528 "reset": true, 00:04:29.528 "nvme_admin": false, 00:04:29.528 "nvme_io": false, 00:04:29.528 "nvme_io_md": false, 00:04:29.528 "write_zeroes": true, 00:04:29.528 "zcopy": true, 00:04:29.528 "get_zone_info": false, 00:04:29.528 "zone_management": false, 00:04:29.528 "zone_append": false, 00:04:29.528 "compare": false, 00:04:29.528 "compare_and_write": false, 00:04:29.528 "abort": true, 00:04:29.528 "seek_hole": false, 00:04:29.528 "seek_data": false, 00:04:29.528 "copy": true, 00:04:29.528 "nvme_iov_md": false 00:04:29.528 }, 00:04:29.528 "memory_domains": [ 00:04:29.528 { 00:04:29.528 "dma_device_id": "system", 00:04:29.528 "dma_device_type": 1 00:04:29.528 }, 00:04:29.528 { 00:04:29.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.528 "dma_device_type": 2 00:04:29.528 } 00:04:29.528 ], 00:04:29.528 "driver_specific": {} 00:04:29.528 }, 00:04:29.528 { 00:04:29.528 "name": "Passthru0", 00:04:29.528 "aliases": [ 00:04:29.528 "f4ca6ec3-a654-5763-becd-59c53fb5d13b" 00:04:29.528 ], 00:04:29.528 "product_name": "passthru", 00:04:29.528 "block_size": 512, 00:04:29.528 "num_blocks": 16384, 00:04:29.528 "uuid": "f4ca6ec3-a654-5763-becd-59c53fb5d13b", 00:04:29.528 "assigned_rate_limits": { 00:04:29.528 "rw_ios_per_sec": 0, 00:04:29.528 "rw_mbytes_per_sec": 0, 00:04:29.528 "r_mbytes_per_sec": 0, 00:04:29.528 "w_mbytes_per_sec": 0 00:04:29.528 }, 00:04:29.528 "claimed": false, 00:04:29.528 "zoned": false, 00:04:29.528 "supported_io_types": { 00:04:29.528 "read": true, 00:04:29.528 "write": true, 00:04:29.528 "unmap": true, 00:04:29.528 "flush": true, 00:04:29.528 "reset": true, 00:04:29.528 "nvme_admin": false, 00:04:29.528 "nvme_io": false, 00:04:29.528 
"nvme_io_md": false, 00:04:29.528 "write_zeroes": true, 00:04:29.528 "zcopy": true, 00:04:29.528 "get_zone_info": false, 00:04:29.528 "zone_management": false, 00:04:29.528 "zone_append": false, 00:04:29.528 "compare": false, 00:04:29.528 "compare_and_write": false, 00:04:29.528 "abort": true, 00:04:29.528 "seek_hole": false, 00:04:29.528 "seek_data": false, 00:04:29.528 "copy": true, 00:04:29.528 "nvme_iov_md": false 00:04:29.528 }, 00:04:29.528 "memory_domains": [ 00:04:29.528 { 00:04:29.528 "dma_device_id": "system", 00:04:29.528 "dma_device_type": 1 00:04:29.528 }, 00:04:29.528 { 00:04:29.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:29.528 "dma_device_type": 2 00:04:29.528 } 00:04:29.528 ], 00:04:29.528 "driver_specific": { 00:04:29.528 "passthru": { 00:04:29.528 "name": "Passthru0", 00:04:29.528 "base_bdev_name": "Malloc2" 00:04:29.528 } 00:04:29.528 } 00:04:29.528 } 00:04:29.528 ]' 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:29.528 09:27:15 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:29.528 00:04:29.528 real 0m0.314s 00:04:29.528 user 0m0.217s 00:04:29.528 sys 0m0.034s 00:04:29.529 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.529 09:27:15 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:29.529 ************************************ 00:04:29.529 END TEST rpc_daemon_integrity 00:04:29.529 ************************************ 00:04:29.529 09:27:15 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:29.529 09:27:15 rpc -- rpc/rpc.sh@84 -- # killprocess 56770 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@952 -- # '[' -z 56770 ']' 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@956 -- # kill -0 56770 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@957 -- # uname 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56770 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:29.529 killing process with pid 56770 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 56770' 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@971 -- # kill 56770 00:04:29.529 09:27:15 rpc -- common/autotest_common.sh@976 -- # wait 56770 00:04:29.788 00:04:29.788 real 0m2.098s 00:04:29.788 user 0m2.894s 00:04:29.788 sys 0m0.525s 00:04:29.788 09:27:15 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:29.788 09:27:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.788 ************************************ 00:04:29.788 END TEST rpc 00:04:29.788 ************************************ 00:04:29.788 09:27:15 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:29.788 09:27:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:29.788 09:27:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:29.788 09:27:15 -- common/autotest_common.sh@10 -- # set +x 00:04:30.046 ************************************ 00:04:30.046 START TEST skip_rpc 00:04:30.046 ************************************ 00:04:30.046 09:27:15 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:30.046 * Looking for test storage... 00:04:30.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:30.046 09:27:15 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:30.046 09:27:15 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:30.046 09:27:15 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:30.046 09:27:15 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:30.046 09:27:15 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.047 09:27:15 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.047 --rc genhtml_branch_coverage=1 00:04:30.047 --rc genhtml_function_coverage=1 00:04:30.047 --rc genhtml_legend=1 00:04:30.047 --rc geninfo_all_blocks=1 00:04:30.047 --rc geninfo_unexecuted_blocks=1 00:04:30.047 00:04:30.047 ' 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.047 --rc genhtml_branch_coverage=1 00:04:30.047 --rc genhtml_function_coverage=1 00:04:30.047 --rc genhtml_legend=1 00:04:30.047 --rc geninfo_all_blocks=1 00:04:30.047 --rc geninfo_unexecuted_blocks=1 00:04:30.047 00:04:30.047 ' 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.047 --rc genhtml_branch_coverage=1 00:04:30.047 --rc genhtml_function_coverage=1 00:04:30.047 --rc genhtml_legend=1 00:04:30.047 --rc geninfo_all_blocks=1 00:04:30.047 --rc geninfo_unexecuted_blocks=1 00:04:30.047 00:04:30.047 ' 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:30.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.047 --rc genhtml_branch_coverage=1 00:04:30.047 --rc genhtml_function_coverage=1 00:04:30.047 --rc genhtml_legend=1 00:04:30.047 --rc geninfo_all_blocks=1 00:04:30.047 --rc geninfo_unexecuted_blocks=1 00:04:30.047 00:04:30.047 ' 00:04:30.047 09:27:15 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:30.047 09:27:15 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:30.047 09:27:15 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:30.047 09:27:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.047 ************************************ 00:04:30.047 START TEST skip_rpc 00:04:30.047 ************************************ 00:04:30.047 09:27:15 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:30.047 09:27:15 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=56963 00:04:30.047 09:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:30.047 09:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:30.047 09:27:15 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:30.306 [2024-11-05 09:27:16.014274] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:04:30.306 [2024-11-05 09:27:16.014372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56963 ] 00:04:30.306 [2024-11-05 09:27:16.150479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.306 [2024-11-05 09:27:16.182860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:30.306 [2024-11-05 09:27:16.221462] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56963 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 56963 ']' 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 56963 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:35.577 09:27:20 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 56963 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:35.577 killing process with pid 56963 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 56963' 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 56963 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 56963 00:04:35.577 00:04:35.577 real 0m5.264s 00:04:35.577 user 0m5.011s 00:04:35.577 sys 0m0.173s 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:35.577 09:27:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.577 ************************************ 00:04:35.577 END TEST skip_rpc 00:04:35.577 ************************************ 00:04:35.577 09:27:21 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:35.577 09:27:21 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:35.577 09:27:21 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:35.577 09:27:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.577 ************************************ 00:04:35.577 START TEST skip_rpc_with_json 00:04:35.577 ************************************ 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57044 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57044 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57044 ']' 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:35.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:35.577 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.577 [2024-11-05 09:27:21.345913] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
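The skip_rpc_with_json run starting above launches a fresh spdk_tgt and drives it over the default /var/tmp/spdk.sock socket; the rpc_cmd wrappers traced below are equivalent to direct rpc.py invocations. A sketch of the same sequence by hand, from the spdk repo root (the config path is this run's layout):

    $ scripts/rpc.py nvmf_get_transports --trtype tcp    # errors until the transport exists
    $ scripts/rpc.py nvmf_create_transport -t tcp
    $ scripts/rpc.py save_config > test/rpc/config.json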
00:04:35.577 [2024-11-05 09:27:21.346604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57044 ] 00:04:35.577 [2024-11-05 09:27:21.491369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:35.577 [2024-11-05 09:27:21.521233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.836 [2024-11-05 09:27:21.561570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 [2024-11-05 09:27:21.687346] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:35.836 request: 00:04:35.836 { 00:04:35.836 "trtype": "tcp", 00:04:35.836 "method": "nvmf_get_transports", 00:04:35.836 "req_id": 1 00:04:35.836 } 00:04:35.836 Got JSON-RPC error response 00:04:35.836 response: 00:04:35.836 { 00:04:35.836 "code": -19, 00:04:35.836 "message": "No such device" 00:04:35.836 } 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:35.836 [2024-11-05 09:27:21.703443] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:35.836 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:36.096 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:36.096 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.096 { 00:04:36.096 "subsystems": [ 00:04:36.096 { 00:04:36.096 "subsystem": "fsdev", 00:04:36.096 "config": [ 00:04:36.096 { 00:04:36.096 "method": "fsdev_set_opts", 00:04:36.096 "params": { 00:04:36.096 "fsdev_io_pool_size": 65535, 00:04:36.096 "fsdev_io_cache_size": 256 00:04:36.096 } 00:04:36.096 } 00:04:36.096 ] 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "subsystem": "keyring", 00:04:36.096 "config": [] 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "subsystem": "iobuf", 00:04:36.096 "config": [ 00:04:36.096 { 00:04:36.096 "method": "iobuf_set_options", 00:04:36.096 "params": { 00:04:36.096 "small_pool_count": 8192, 00:04:36.096 "large_pool_count": 1024, 00:04:36.096 "small_bufsize": 8192, 00:04:36.096 "large_bufsize": 135168, 00:04:36.096 "enable_numa": false 00:04:36.096 } 
00:04:36.096 } 00:04:36.096 ] 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "subsystem": "sock", 00:04:36.096 "config": [ 00:04:36.096 { 00:04:36.096 "method": "sock_set_default_impl", 00:04:36.096 "params": { 00:04:36.096 "impl_name": "uring" 00:04:36.096 } 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "method": "sock_impl_set_options", 00:04:36.096 "params": { 00:04:36.096 "impl_name": "ssl", 00:04:36.096 "recv_buf_size": 4096, 00:04:36.096 "send_buf_size": 4096, 00:04:36.096 "enable_recv_pipe": true, 00:04:36.096 "enable_quickack": false, 00:04:36.096 "enable_placement_id": 0, 00:04:36.096 "enable_zerocopy_send_server": true, 00:04:36.096 "enable_zerocopy_send_client": false, 00:04:36.096 "zerocopy_threshold": 0, 00:04:36.096 "tls_version": 0, 00:04:36.096 "enable_ktls": false 00:04:36.096 } 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "method": "sock_impl_set_options", 00:04:36.096 "params": { 00:04:36.096 "impl_name": "posix", 00:04:36.096 "recv_buf_size": 2097152, 00:04:36.096 "send_buf_size": 2097152, 00:04:36.096 "enable_recv_pipe": true, 00:04:36.096 "enable_quickack": false, 00:04:36.096 "enable_placement_id": 0, 00:04:36.096 "enable_zerocopy_send_server": true, 00:04:36.096 "enable_zerocopy_send_client": false, 00:04:36.096 "zerocopy_threshold": 0, 00:04:36.096 "tls_version": 0, 00:04:36.096 "enable_ktls": false 00:04:36.096 } 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "method": "sock_impl_set_options", 00:04:36.096 "params": { 00:04:36.096 "impl_name": "uring", 00:04:36.096 "recv_buf_size": 2097152, 00:04:36.096 "send_buf_size": 2097152, 00:04:36.096 "enable_recv_pipe": true, 00:04:36.096 "enable_quickack": false, 00:04:36.096 "enable_placement_id": 0, 00:04:36.096 "enable_zerocopy_send_server": false, 00:04:36.096 "enable_zerocopy_send_client": false, 00:04:36.096 "zerocopy_threshold": 0, 00:04:36.096 "tls_version": 0, 00:04:36.096 "enable_ktls": false 00:04:36.096 } 00:04:36.096 } 00:04:36.096 ] 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "subsystem": "vmd", 00:04:36.096 "config": [] 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "subsystem": "accel", 00:04:36.096 "config": [ 00:04:36.096 { 00:04:36.096 "method": "accel_set_options", 00:04:36.096 "params": { 00:04:36.096 "small_cache_size": 128, 00:04:36.096 "large_cache_size": 16, 00:04:36.096 "task_count": 2048, 00:04:36.096 "sequence_count": 2048, 00:04:36.096 "buf_count": 2048 00:04:36.096 } 00:04:36.096 } 00:04:36.096 ] 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "subsystem": "bdev", 00:04:36.096 "config": [ 00:04:36.096 { 00:04:36.096 "method": "bdev_set_options", 00:04:36.096 "params": { 00:04:36.096 "bdev_io_pool_size": 65535, 00:04:36.096 "bdev_io_cache_size": 256, 00:04:36.096 "bdev_auto_examine": true, 00:04:36.096 "iobuf_small_cache_size": 128, 00:04:36.096 "iobuf_large_cache_size": 16 00:04:36.096 } 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "method": "bdev_raid_set_options", 00:04:36.096 "params": { 00:04:36.096 "process_window_size_kb": 1024, 00:04:36.096 "process_max_bandwidth_mb_sec": 0 00:04:36.096 } 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "method": "bdev_iscsi_set_options", 00:04:36.096 "params": { 00:04:36.096 "timeout_sec": 30 00:04:36.096 } 00:04:36.096 }, 00:04:36.096 { 00:04:36.096 "method": "bdev_nvme_set_options", 00:04:36.096 "params": { 00:04:36.096 "action_on_timeout": "none", 00:04:36.096 "timeout_us": 0, 00:04:36.096 "timeout_admin_us": 0, 00:04:36.096 "keep_alive_timeout_ms": 10000, 00:04:36.096 "arbitration_burst": 0, 00:04:36.096 "low_priority_weight": 0, 00:04:36.096 "medium_priority_weight": 
0, 00:04:36.096 "high_priority_weight": 0, 00:04:36.096 "nvme_adminq_poll_period_us": 10000, 00:04:36.096 "nvme_ioq_poll_period_us": 0, 00:04:36.096 "io_queue_requests": 0, 00:04:36.096 "delay_cmd_submit": true, 00:04:36.096 "transport_retry_count": 4, 00:04:36.096 "bdev_retry_count": 3, 00:04:36.096 "transport_ack_timeout": 0, 00:04:36.096 "ctrlr_loss_timeout_sec": 0, 00:04:36.096 "reconnect_delay_sec": 0, 00:04:36.097 "fast_io_fail_timeout_sec": 0, 00:04:36.097 "disable_auto_failback": false, 00:04:36.097 "generate_uuids": false, 00:04:36.097 "transport_tos": 0, 00:04:36.097 "nvme_error_stat": false, 00:04:36.097 "rdma_srq_size": 0, 00:04:36.097 "io_path_stat": false, 00:04:36.097 "allow_accel_sequence": false, 00:04:36.097 "rdma_max_cq_size": 0, 00:04:36.097 "rdma_cm_event_timeout_ms": 0, 00:04:36.097 "dhchap_digests": [ 00:04:36.097 "sha256", 00:04:36.097 "sha384", 00:04:36.097 "sha512" 00:04:36.097 ], 00:04:36.097 "dhchap_dhgroups": [ 00:04:36.097 "null", 00:04:36.097 "ffdhe2048", 00:04:36.097 "ffdhe3072", 00:04:36.097 "ffdhe4096", 00:04:36.097 "ffdhe6144", 00:04:36.097 "ffdhe8192" 00:04:36.097 ] 00:04:36.097 } 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "method": "bdev_nvme_set_hotplug", 00:04:36.097 "params": { 00:04:36.097 "period_us": 100000, 00:04:36.097 "enable": false 00:04:36.097 } 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "method": "bdev_wait_for_examine" 00:04:36.097 } 00:04:36.097 ] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "scsi", 00:04:36.097 "config": null 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "scheduler", 00:04:36.097 "config": [ 00:04:36.097 { 00:04:36.097 "method": "framework_set_scheduler", 00:04:36.097 "params": { 00:04:36.097 "name": "static" 00:04:36.097 } 00:04:36.097 } 00:04:36.097 ] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "vhost_scsi", 00:04:36.097 "config": [] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "vhost_blk", 00:04:36.097 "config": [] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "ublk", 00:04:36.097 "config": [] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "nbd", 00:04:36.097 "config": [] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "nvmf", 00:04:36.097 "config": [ 00:04:36.097 { 00:04:36.097 "method": "nvmf_set_config", 00:04:36.097 "params": { 00:04:36.097 "discovery_filter": "match_any", 00:04:36.097 "admin_cmd_passthru": { 00:04:36.097 "identify_ctrlr": false 00:04:36.097 }, 00:04:36.097 "dhchap_digests": [ 00:04:36.097 "sha256", 00:04:36.097 "sha384", 00:04:36.097 "sha512" 00:04:36.097 ], 00:04:36.097 "dhchap_dhgroups": [ 00:04:36.097 "null", 00:04:36.097 "ffdhe2048", 00:04:36.097 "ffdhe3072", 00:04:36.097 "ffdhe4096", 00:04:36.097 "ffdhe6144", 00:04:36.097 "ffdhe8192" 00:04:36.097 ] 00:04:36.097 } 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "method": "nvmf_set_max_subsystems", 00:04:36.097 "params": { 00:04:36.097 "max_subsystems": 1024 00:04:36.097 } 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "method": "nvmf_set_crdt", 00:04:36.097 "params": { 00:04:36.097 "crdt1": 0, 00:04:36.097 "crdt2": 0, 00:04:36.097 "crdt3": 0 00:04:36.097 } 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "method": "nvmf_create_transport", 00:04:36.097 "params": { 00:04:36.097 "trtype": "TCP", 00:04:36.097 "max_queue_depth": 128, 00:04:36.097 "max_io_qpairs_per_ctrlr": 127, 00:04:36.097 "in_capsule_data_size": 4096, 00:04:36.097 "max_io_size": 131072, 00:04:36.097 "io_unit_size": 131072, 00:04:36.097 "max_aq_depth": 128, 00:04:36.097 "num_shared_buffers": 511, 00:04:36.097 
"buf_cache_size": 4294967295, 00:04:36.097 "dif_insert_or_strip": false, 00:04:36.097 "zcopy": false, 00:04:36.097 "c2h_success": true, 00:04:36.097 "sock_priority": 0, 00:04:36.097 "abort_timeout_sec": 1, 00:04:36.097 "ack_timeout": 0, 00:04:36.097 "data_wr_pool_size": 0 00:04:36.097 } 00:04:36.097 } 00:04:36.097 ] 00:04:36.097 }, 00:04:36.097 { 00:04:36.097 "subsystem": "iscsi", 00:04:36.097 "config": [ 00:04:36.097 { 00:04:36.097 "method": "iscsi_set_options", 00:04:36.097 "params": { 00:04:36.097 "node_base": "iqn.2016-06.io.spdk", 00:04:36.097 "max_sessions": 128, 00:04:36.097 "max_connections_per_session": 2, 00:04:36.097 "max_queue_depth": 64, 00:04:36.097 "default_time2wait": 2, 00:04:36.097 "default_time2retain": 20, 00:04:36.097 "first_burst_length": 8192, 00:04:36.097 "immediate_data": true, 00:04:36.097 "allow_duplicated_isid": false, 00:04:36.097 "error_recovery_level": 0, 00:04:36.097 "nop_timeout": 60, 00:04:36.097 "nop_in_interval": 30, 00:04:36.097 "disable_chap": false, 00:04:36.097 "require_chap": false, 00:04:36.097 "mutual_chap": false, 00:04:36.097 "chap_group": 0, 00:04:36.097 "max_large_datain_per_connection": 64, 00:04:36.097 "max_r2t_per_connection": 4, 00:04:36.097 "pdu_pool_size": 36864, 00:04:36.097 "immediate_data_pool_size": 16384, 00:04:36.097 "data_out_pool_size": 2048 00:04:36.097 } 00:04:36.097 } 00:04:36.097 ] 00:04:36.097 } 00:04:36.097 ] 00:04:36.097 } 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57044 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57044 ']' 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57044 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57044 00:04:36.097 killing process with pid 57044 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57044' 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57044 00:04:36.097 09:27:21 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57044 00:04:36.356 09:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.356 09:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57064 00:04:36.356 09:27:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57064 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57064 ']' 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57064 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:41.624 09:27:27 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57064 00:04:41.624 killing process with pid 57064 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:41.624 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57064' 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57064 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57064 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:41.625 00:04:41.625 real 0m6.128s 00:04:41.625 user 0m5.867s 00:04:41.625 sys 0m0.415s 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.625 ************************************ 00:04:41.625 END TEST skip_rpc_with_json 00:04:41.625 ************************************ 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.625 09:27:27 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:41.625 09:27:27 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.625 09:27:27 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.625 09:27:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.625 ************************************ 00:04:41.625 START TEST skip_rpc_with_delay 00:04:41.625 ************************************ 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:41.625 09:27:27 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:41.625 [2024-11-05 09:27:27.510313] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:41.625 ************************************ 00:04:41.625 END TEST skip_rpc_with_delay 00:04:41.625 ************************************ 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:41.625 00:04:41.625 real 0m0.069s 00:04:41.625 user 0m0.042s 00:04:41.625 sys 0m0.025s 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:41.625 09:27:27 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:41.625 09:27:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:41.625 09:27:27 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:41.625 09:27:27 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:41.625 09:27:27 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:41.625 09:27:27 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:41.625 09:27:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.625 ************************************ 00:04:41.625 START TEST exit_on_failed_rpc_init 00:04:41.625 ************************************ 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:04:41.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57174 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57174 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57174 ']' 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:41.625 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:41.883 [2024-11-05 09:27:27.642927] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
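[Annotation] The JSON blob dumped earlier is the target's full subsystem configuration as emitted by save_config: one entry per subsystem (sock, accel, bdev, nvmf, iscsi, and so on), each a list of method/params pairs that can be replayed verbatim. skip_rpc_with_json then proves the replay works with the RPC server disabled: it relaunches spdk_tgt with --no-rpc-server --json and greps the captured log for the TCP transport banner. A minimal sketch of that replay step; the redirect to log.txt is an assumption, since the trace greps that file but does not show how it was produced:

  # Boot purely from the saved JSON config, with no RPC server at all.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json > log.txt 2>&1 &
  pid=$!
  sleep 5                                # the test waits a fixed five seconds here
  kill -SIGINT "$pid"
  wait "$pid"
  # The nvmf transport only exists if the JSON config was actually applied:
  grep -q 'TCP Transport Init' log.txt && echo 'config replayed successfully'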
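[Annotation] skip_rpc_with_delay passes for the opposite reason: spdk_tgt refuses the flag combination outright. --wait-for-rpc asks the app to pause until configuration RPCs arrive, which is meaningless once --no-rpc-server has disabled the listener, so spdk_app_start errors out and the harness's NOT wrapper turns that non-zero exit into a pass. The same assertion, sketched without the harness:

  # Expect failure: waiting for RPCs makes no sense with the RPC server off.
  if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo 'FAIL: contradictory flags were accepted' >&2
      exit 1
  fi
  echo 'PASS: spdk_tgt exited non-zero as required'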
00:04:41.883 [2024-11-05 09:27:27.643616] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57174 ] 00:04:41.883 [2024-11-05 09:27:27.791448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.883 [2024-11-05 09:27:27.820311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.142 [2024-11-05 09:27:27.856782] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:42.142 09:27:27 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:42.142 [2024-11-05 09:27:28.049090] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:04:42.142 [2024-11-05 09:27:28.049182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57183 ] 00:04:42.401 [2024-11-05 09:27:28.200633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.401 [2024-11-05 09:27:28.242706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:42.401 [2024-11-05 09:27:28.243064] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
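[Annotation] exit_on_failed_rpc_init manufactures the "in use" error above deliberately: the first target (pid 57174) already owns the default RPC socket /var/tmp/spdk.sock, so when the second instance (-m 0x2, pid 57183) tries to bind it, the listen fails; the lines that follow show rpc.c giving up and spdk_app_stop unwinding with a non-zero code. Two targets can only coexist if each gets its own -r socket path. The collision, reproduced by hand; the sleep is a crude stand-in for the suite's waitforlisten helper:

  ./build/bin/spdk_tgt -m 0x1 &          # first instance claims /var/tmp/spdk.sock
  first=$!
  sleep 1                                # crude; the suite polls the socket instead
  ./build/bin/spdk_tgt -m 0x2 \
      || echo 'second instance failed RPC init, as the test expects'
  kill -SIGINT "$first"                  # clean up the surviving target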
00:04:42.401 [2024-11-05 09:27:28.243093] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:42.401 [2024-11-05 09:27:28.243104] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57174 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57174 ']' 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57174 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57174 00:04:42.401 killing process with pid 57174 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57174' 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57174 00:04:42.401 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57174 00:04:42.659 00:04:42.659 real 0m0.977s 00:04:42.659 user 0m1.126s 00:04:42.659 sys 0m0.267s 00:04:42.659 ************************************ 00:04:42.659 END TEST exit_on_failed_rpc_init 00:04:42.659 ************************************ 00:04:42.659 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.659 09:27:28 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:42.659 09:27:28 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:42.659 00:04:42.659 real 0m12.842s 00:04:42.659 user 0m12.229s 00:04:42.659 sys 0m1.077s 00:04:42.659 09:27:28 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.659 09:27:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.659 ************************************ 00:04:42.659 END TEST skip_rpc 00:04:42.659 ************************************ 00:04:42.919 09:27:28 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.919 09:27:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:42.919 09:27:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:42.919 09:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:42.919 
************************************ 00:04:42.919 START TEST rpc_client 00:04:42.919 ************************************ 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:42.919 * Looking for test storage... 00:04:42.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.919 09:27:28 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.919 --rc genhtml_branch_coverage=1 00:04:42.919 --rc genhtml_function_coverage=1 00:04:42.919 --rc genhtml_legend=1 00:04:42.919 --rc geninfo_all_blocks=1 00:04:42.919 --rc geninfo_unexecuted_blocks=1 00:04:42.919 00:04:42.919 ' 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.919 --rc genhtml_branch_coverage=1 00:04:42.919 --rc genhtml_function_coverage=1 00:04:42.919 --rc genhtml_legend=1 00:04:42.919 --rc geninfo_all_blocks=1 00:04:42.919 --rc geninfo_unexecuted_blocks=1 00:04:42.919 00:04:42.919 ' 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.919 --rc genhtml_branch_coverage=1 00:04:42.919 --rc genhtml_function_coverage=1 00:04:42.919 --rc genhtml_legend=1 00:04:42.919 --rc geninfo_all_blocks=1 00:04:42.919 --rc geninfo_unexecuted_blocks=1 00:04:42.919 00:04:42.919 ' 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:42.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.919 --rc genhtml_branch_coverage=1 00:04:42.919 --rc genhtml_function_coverage=1 00:04:42.919 --rc genhtml_legend=1 00:04:42.919 --rc geninfo_all_blocks=1 00:04:42.919 --rc geninfo_unexecuted_blocks=1 00:04:42.919 00:04:42.919 ' 00:04:42.919 09:27:28 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:42.919 OK 00:04:42.919 09:27:28 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:42.919 ************************************ 00:04:42.919 END TEST rpc_client 00:04:42.919 ************************************ 00:04:42.919 00:04:42.919 real 0m0.213s 00:04:42.919 user 0m0.130s 00:04:42.919 sys 0m0.090s 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:42.919 09:27:28 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:43.179 09:27:28 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.179 09:27:28 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:43.179 09:27:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:43.179 09:27:28 -- common/autotest_common.sh@10 -- # set +x 00:04:43.179 ************************************ 00:04:43.179 START TEST json_config 00:04:43.179 ************************************ 00:04:43.179 09:27:28 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:43.179 09:27:28 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:43.179 09:27:28 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:04:43.179 09:27:28 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:43.179 09:27:29 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.179 09:27:29 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.179 09:27:29 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.179 09:27:29 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.179 09:27:29 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.179 09:27:29 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.179 09:27:29 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:43.179 09:27:29 json_config -- scripts/common.sh@345 -- # : 1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.179 09:27:29 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:43.179 09:27:29 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@353 -- # local d=1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.179 09:27:29 json_config -- scripts/common.sh@355 -- # echo 1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.179 09:27:29 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@353 -- # local d=2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.179 09:27:29 json_config -- scripts/common.sh@355 -- # echo 2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.179 09:27:29 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.179 09:27:29 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.179 09:27:29 json_config -- scripts/common.sh@368 -- # return 0 00:04:43.179 09:27:29 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.179 09:27:29 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:43.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.179 --rc genhtml_branch_coverage=1 00:04:43.179 --rc genhtml_function_coverage=1 00:04:43.179 --rc genhtml_legend=1 00:04:43.179 --rc geninfo_all_blocks=1 00:04:43.179 --rc geninfo_unexecuted_blocks=1 00:04:43.179 00:04:43.179 ' 00:04:43.179 09:27:29 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:43.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.179 --rc genhtml_branch_coverage=1 00:04:43.179 --rc genhtml_function_coverage=1 00:04:43.179 --rc genhtml_legend=1 00:04:43.179 --rc geninfo_all_blocks=1 00:04:43.179 --rc geninfo_unexecuted_blocks=1 00:04:43.179 00:04:43.179 ' 00:04:43.179 09:27:29 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:43.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.179 --rc genhtml_branch_coverage=1 00:04:43.179 --rc genhtml_function_coverage=1 00:04:43.179 --rc genhtml_legend=1 00:04:43.179 --rc geninfo_all_blocks=1 00:04:43.179 --rc geninfo_unexecuted_blocks=1 00:04:43.179 00:04:43.179 ' 00:04:43.179 09:27:29 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:43.179 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.179 --rc genhtml_branch_coverage=1 00:04:43.179 --rc genhtml_function_coverage=1 00:04:43.179 --rc genhtml_legend=1 00:04:43.179 --rc geninfo_all_blocks=1 00:04:43.179 --rc geninfo_unexecuted_blocks=1 00:04:43.179 00:04:43.179 ' 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.179 09:27:29 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.179 09:27:29 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:43.179 09:27:29 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.179 09:27:29 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.179 09:27:29 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.179 09:27:29 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.179 09:27:29 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.179 09:27:29 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.179 09:27:29 json_config -- paths/export.sh@5 -- # export PATH 00:04:43.179 09:27:29 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@51 -- # : 0 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:43.179 09:27:29 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:43.179 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:43.179 09:27:29 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:43.179 09:27:29 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:43.180 INFO: JSON configuration test init 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.180 Waiting for target to run... 00:04:43.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
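[Annotation] waitforlisten does what its name says: it loops until something answers on the requested UNIX socket, because spdk_tgt needs a moment to map hugepages and start its reactor before the RPC server is reachable. A hand-rolled equivalent of that helper; the use of spdk_get_version is an assumption (it is a cheap RPC that a live target should answer even while parked by --wait-for-rpc, but the trace itself never issues it):

  # Poll the RPC socket until the freshly launched target responds.
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.1
  done
  echo 'target is up and listening on /var/tmp/spdk_tgt.sock'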
00:04:43.180 09:27:29 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:43.180 09:27:29 json_config -- json_config/common.sh@9 -- # local app=target 00:04:43.180 09:27:29 json_config -- json_config/common.sh@10 -- # shift 00:04:43.180 09:27:29 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:43.180 09:27:29 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:43.180 09:27:29 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:43.180 09:27:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.180 09:27:29 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:43.180 09:27:29 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57318 00:04:43.180 09:27:29 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:43.180 09:27:29 json_config -- json_config/common.sh@25 -- # waitforlisten 57318 /var/tmp/spdk_tgt.sock 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@833 -- # '[' -z 57318 ']' 00:04:43.180 09:27:29 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:43.180 09:27:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:43.438 [2024-11-05 09:27:29.190633] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
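[Annotation] The launch line above is worth unpacking, since every later step leans on it. The command is copied from the trace; the glosses are standard spdk_tgt option semantics, not suite code:

  # -m 0x1:           core mask, a single reactor pinned to core 0
  # -s 1024:          cap the app's hugepage memory pool at 1024 MB
  # -r <path>:        RPC socket path, avoiding the default /var/tmp/spdk.sock
  # --wait-for-rpc:   start the app but defer subsystem init until RPCs say go
  ./build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc

Because of --wait-for-rpc the target comes up essentially empty, and the configuration that follows arrives entirely over the RPC socket.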
00:04:43.438 [2024-11-05 09:27:29.190949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57318 ] 00:04:43.696 [2024-11-05 09:27:29.493075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.696 [2024-11-05 09:27:29.514292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.262 09:27:30 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:44.262 09:27:30 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:44.262 09:27:30 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.262 00:04:44.262 09:27:30 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:44.262 09:27:30 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:44.262 09:27:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.521 09:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.521 09:27:30 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:44.521 09:27:30 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:44.521 09:27:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:44.521 09:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:44.521 09:27:30 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:44.521 09:27:30 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:44.521 09:27:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:44.780 [2024-11-05 09:27:30.583775] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:45.039 09:27:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.039 09:27:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:45.039 09:27:30 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:45.039 09:27:30 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@54 -- # sort 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:45.298 09:27:31 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.298 09:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:45.298 09:27:31 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.298 09:27:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:45.298 09:27:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:45.298 09:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:45.558 MallocForNvmf0 00:04:45.558 09:27:31 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:45.558 09:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:45.816 MallocForNvmf1 00:04:45.816 09:27:31 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:45.816 09:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:46.074 [2024-11-05 09:27:31.953807] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:46.074 09:27:31 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.074 09:27:31 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:46.332 09:27:32 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.332 09:27:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:46.591 09:27:32 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.591 09:27:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:46.851 09:27:32 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:46.851 09:27:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.109 [2024-11-05 09:27:32.866266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.109 09:27:32 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:47.109 09:27:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.109 09:27:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.109 09:27:32 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:47.109 09:27:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.109 09:27:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.109 09:27:32 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:47.109 09:27:32 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.109 09:27:32 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.368 MallocBdevForConfigChangeCheck 00:04:47.368 09:27:33 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:47.368 09:27:33 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.368 09:27:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.368 09:27:33 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:47.368 09:27:33 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.936 INFO: shutting down applications... 00:04:47.936 09:27:33 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
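[Annotation] Strung together, the tgt_rpc calls above amount to a complete NVMe-oF target bring-up followed by a configuration snapshot. The same sequence, runnable by hand against the socket; every command is copied from the trace, and only the RPC shorthand variable is new:

  RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB bdev, 512 B blocks
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB bdev, 1024 B blocks
  $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
  $RPC save_config > spdk_tgt_config.json                   # snapshot for the relaunch check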
00:04:47.936 09:27:33 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:47.936 09:27:33 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:47.936 09:27:33 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:47.936 09:27:33 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:48.203 Calling clear_iscsi_subsystem 00:04:48.203 Calling clear_nvmf_subsystem 00:04:48.203 Calling clear_nbd_subsystem 00:04:48.203 Calling clear_ublk_subsystem 00:04:48.203 Calling clear_vhost_blk_subsystem 00:04:48.203 Calling clear_vhost_scsi_subsystem 00:04:48.203 Calling clear_bdev_subsystem 00:04:48.203 09:27:34 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:48.203 09:27:34 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:48.203 09:27:34 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:48.203 09:27:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:48.203 09:27:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:48.203 09:27:34 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:48.823 09:27:34 json_config -- json_config/json_config.sh@352 -- # break 00:04:48.823 09:27:34 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:48.823 09:27:34 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:48.823 09:27:34 json_config -- json_config/common.sh@31 -- # local app=target 00:04:48.823 09:27:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.823 09:27:34 json_config -- json_config/common.sh@35 -- # [[ -n 57318 ]] 00:04:48.823 09:27:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 57318 00:04:48.823 09:27:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.823 09:27:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.823 09:27:34 json_config -- json_config/common.sh@41 -- # kill -0 57318 00:04:48.823 09:27:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.083 09:27:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.083 SPDK target shutdown done 00:04:49.083 INFO: relaunching applications... 00:04:49.083 09:27:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.083 09:27:34 json_config -- json_config/common.sh@41 -- # kill -0 57318 00:04:49.083 09:27:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.083 09:27:34 json_config -- json_config/common.sh@43 -- # break 00:04:49.083 09:27:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.083 09:27:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.083 09:27:34 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 
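[Annotation] json_config_test_shutdown_app stops the target the polite way: SIGINT first, then up to thirty half-second checks with kill -0 until the process is really gone, exactly the loop visible in the trace above. Condensed:

  # Graceful stop with a bounded wait (30 x 0.5 s), as the helper does.
  kill -SIGINT "$pid"
  for _ in $(seq 1 30); do
      kill -0 "$pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done

With the first target gone, the suite immediately relaunches from the saved file, as the next lines show.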
00:04:49.083 09:27:34 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.083 09:27:34 json_config -- json_config/common.sh@9 -- # local app=target 00:04:49.083 09:27:34 json_config -- json_config/common.sh@10 -- # shift 00:04:49.083 09:27:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.083 09:27:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.083 09:27:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.083 09:27:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.083 09:27:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.083 09:27:34 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:49.083 09:27:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=57514 00:04:49.083 09:27:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.083 Waiting for target to run... 00:04:49.083 09:27:34 json_config -- json_config/common.sh@25 -- # waitforlisten 57514 /var/tmp/spdk_tgt.sock 00:04:49.083 09:27:34 json_config -- common/autotest_common.sh@833 -- # '[' -z 57514 ']' 00:04:49.083 09:27:34 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.083 09:27:34 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:49.083 09:27:34 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.083 09:27:34 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:49.083 09:27:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.342 [2024-11-05 09:27:35.047039] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:04:49.342 [2024-11-05 09:27:35.047155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57514 ] 00:04:49.601 [2024-11-05 09:27:35.374776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.601 [2024-11-05 09:27:35.396625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.601 [2024-11-05 09:27:35.526602] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:49.860 [2024-11-05 09:27:35.720922] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:49.860 [2024-11-05 09:27:35.752946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:50.119 00:04:50.119 INFO: Checking if target configuration is the same... 
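[Annotation] The relaunch under way above closes the loop on persistence: boot a fresh target (pid 57514) from the saved spdk_tgt_config.json, dump its configuration again, and the two documents must match modulo ordering, which is why json_diff.sh pipes both sides through config_filter.py -method sort before diffing. A sketch of the comparison, assuming the relaunched target is already up; the file names before.json and after.json are illustrative, where the suite uses mktemp names like /tmp/62.XXX:

  # Normalize key order on both sides, then compare.
  filter=test/json_config/config_filter.py
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > after.json
  "$filter" -method sort < spdk_tgt_config.json > before.json
  diff -u before.json after.json && echo 'INFO: JSON config files are the same'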
00:04:50.119 09:27:36 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:50.119 09:27:36 json_config -- common/autotest_common.sh@866 -- # return 0 00:04:50.119 09:27:36 json_config -- json_config/common.sh@26 -- # echo '' 00:04:50.119 09:27:36 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:50.119 09:27:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:50.119 09:27:36 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:50.119 09:27:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.119 09:27:36 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.119 + '[' 2 -ne 2 ']' 00:04:50.119 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:50.119 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:04:50.119 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:50.119 +++ basename /dev/fd/62 00:04:50.119 ++ mktemp /tmp/62.XXX 00:04:50.119 + tmp_file_1=/tmp/62.3Dh 00:04:50.119 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.119 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.119 + tmp_file_2=/tmp/spdk_tgt_config.json.eup 00:04:50.119 + ret=0 00:04:50.119 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.687 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:50.687 + diff -u /tmp/62.3Dh /tmp/spdk_tgt_config.json.eup 00:04:50.687 INFO: JSON config files are the same 00:04:50.687 + echo 'INFO: JSON config files are the same' 00:04:50.687 + rm /tmp/62.3Dh /tmp/spdk_tgt_config.json.eup 00:04:50.687 + exit 0 00:04:50.687 INFO: changing configuration and checking if this can be detected... 00:04:50.687 09:27:36 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:50.687 09:27:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:50.687 09:27:36 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.687 09:27:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:50.946 09:27:36 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.946 09:27:36 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:50.946 09:27:36 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:50.946 + '[' 2 -ne 2 ']' 00:04:50.946 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:50.946 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:50.946 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:50.946 +++ basename /dev/fd/62 00:04:50.946 ++ mktemp /tmp/62.XXX 00:04:50.946 + tmp_file_1=/tmp/62.iPp 00:04:50.946 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:50.946 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:50.946 + tmp_file_2=/tmp/spdk_tgt_config.json.aq0 00:04:50.947 + ret=0 00:04:50.947 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:51.515 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:51.515 + diff -u /tmp/62.iPp /tmp/spdk_tgt_config.json.aq0 00:04:51.515 + ret=1 00:04:51.515 + echo '=== Start of file: /tmp/62.iPp ===' 00:04:51.515 + cat /tmp/62.iPp 00:04:51.516 + echo '=== End of file: /tmp/62.iPp ===' 00:04:51.516 + echo '' 00:04:51.516 + echo '=== Start of file: /tmp/spdk_tgt_config.json.aq0 ===' 00:04:51.516 + cat /tmp/spdk_tgt_config.json.aq0 00:04:51.516 + echo '=== End of file: /tmp/spdk_tgt_config.json.aq0 ===' 00:04:51.516 + echo '' 00:04:51.516 + rm /tmp/62.iPp /tmp/spdk_tgt_config.json.aq0 00:04:51.516 + exit 1 00:04:51.516 INFO: configuration change detected. 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@324 -- # [[ -n 57514 ]] 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.516 09:27:37 json_config -- json_config/json_config.sh@330 -- # killprocess 57514 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@952 -- # '[' -z 57514 ']' 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@956 -- # kill -0 57514 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@957 -- # uname 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57514 00:04:51.516 
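Both comparison passes above run the same json_diff.sh recipe: normalize each input with the sort filter, diff the temporary files, and let the exit status report whether the configs match (0 in the first pass, 1 once MallocBdevForConfigChangeCheck is deleted). A condensed sketch of that recipe, with sort_json standing in for config_filter.py -method sort:

json_diff_sketch() {
  local a=$1 b=$2 ret=0 t1 t2
  t1=$(mktemp /tmp/62.XXX)                      # same templates as in the trace
  t2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
  sort_json < "$a" > "$t1"                      # stand-in for config_filter.py -method sort
  sort_json < "$b" > "$t2"
  if diff -u "$t1" "$t2"; then
    echo 'INFO: JSON config files are the same'
  else
    ret=1                                       # difference detected; the trace dumps both files here
  fi
  rm -f "$t1" "$t2"
  return "$ret"
}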
09:27:37 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:51.516 killing process with pid 57514 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57514' 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@971 -- # kill 57514 00:04:51.516 09:27:37 json_config -- common/autotest_common.sh@976 -- # wait 57514 00:04:51.775 09:27:37 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:51.775 09:27:37 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:51.775 09:27:37 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:51.775 09:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.775 09:27:37 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:51.775 INFO: Success 00:04:51.775 09:27:37 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:51.775 00:04:51.775 real 0m8.641s 00:04:51.775 user 0m12.644s 00:04:51.775 sys 0m1.440s 00:04:51.775 09:27:37 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:51.775 09:27:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:51.775 ************************************ 00:04:51.775 END TEST json_config 00:04:51.775 ************************************ 00:04:51.775 09:27:37 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.775 09:27:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:51.775 09:27:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:51.775 09:27:37 -- common/autotest_common.sh@10 -- # set +x 00:04:51.775 ************************************ 00:04:51.775 START TEST json_config_extra_key 00:04:51.775 ************************************ 00:04:51.775 09:27:37 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:51.775 09:27:37 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:51.775 09:27:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:51.775 09:27:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:04:52.035 09:27:37 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.035 09:27:37 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:52.035 09:27:37 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.035 09:27:37 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:52.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.035 --rc genhtml_branch_coverage=1 00:04:52.035 --rc genhtml_function_coverage=1 00:04:52.035 --rc genhtml_legend=1 00:04:52.035 --rc geninfo_all_blocks=1 00:04:52.035 --rc geninfo_unexecuted_blocks=1 00:04:52.035 00:04:52.035 ' 00:04:52.035 09:27:37 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:52.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.035 --rc genhtml_branch_coverage=1 00:04:52.035 --rc genhtml_function_coverage=1 00:04:52.035 --rc genhtml_legend=1 00:04:52.035 --rc geninfo_all_blocks=1 00:04:52.035 --rc geninfo_unexecuted_blocks=1 00:04:52.035 00:04:52.035 ' 00:04:52.035 09:27:37 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:52.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.035 --rc genhtml_branch_coverage=1 00:04:52.035 --rc genhtml_function_coverage=1 00:04:52.035 --rc genhtml_legend=1 00:04:52.035 --rc geninfo_all_blocks=1 00:04:52.035 --rc geninfo_unexecuted_blocks=1 00:04:52.035 00:04:52.035 ' 00:04:52.035 09:27:37 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:52.035 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.035 --rc genhtml_branch_coverage=1 00:04:52.035 --rc genhtml_function_coverage=1 00:04:52.035 --rc genhtml_legend=1 00:04:52.035 --rc geninfo_all_blocks=1 00:04:52.035 --rc geninfo_unexecuted_blocks=1 00:04:52.035 00:04:52.035 ' 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:52.035 09:27:37 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:52.035 09:27:37 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.035 09:27:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.035 09:27:37 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.035 09:27:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:52.035 09:27:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:52.035 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:52.035 09:27:37 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:52.035 INFO: launching applications... 00:04:52.035 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
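The `[: : integer expression expected` complaint captured above comes from nvmf/common.sh testing an empty variable numerically ('[' '' -eq 1 ']'); the trace shows the script simply takes the false branch and carries on. A two-line reproduction and the usual guard:

x=''
[ "$x" -eq 1 ]        # fails with "integer expression expected": '' is not a number
[ "${x:-0}" -eq 1 ]   # guarded form: empty defaults to 0 and the test is cleanly false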
00:04:52.036 09:27:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57668 00:04:52.036 Waiting for target to run... 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57668 /var/tmp/spdk_tgt.sock 00:04:52.036 09:27:37 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:52.036 09:27:37 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57668 ']' 00:04:52.036 09:27:37 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:52.036 09:27:37 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:52.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:52.036 09:27:37 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:52.036 09:27:37 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:52.036 09:27:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:52.036 [2024-11-05 09:27:37.855525] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:04:52.036 [2024-11-05 09:27:37.855628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57668 ] 00:04:52.295 [2024-11-05 09:27:38.173437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.295 [2024-11-05 09:27:38.194173] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.295 [2024-11-05 09:27:38.218632] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:53.231 09:27:38 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:53.231 09:27:38 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:04:53.231 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:53.231 INFO: shutting down applications... 00:04:53.231 09:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
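The launch just traced is driven by the per-app associative arrays declared earlier in this test (app_pid, app_socket, app_params, configs_path, all keyed by "target"). A sketch of how those pieces fit together; the backgrounding and $! capture are inferred from the pid 57668 bookkeeping, not quoted from common.sh:

declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

start_app_sketch() {
  local app=$1
  # app_params is left unquoted so '-m 0x1 -s 1024' splits into separate arguments
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
      -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
  app_pid[$app]=$!
  echo 'Waiting for target to run...'
}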
00:04:53.231 09:27:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57668 ]] 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57668 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:53.231 09:27:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57668 00:04:53.491 SPDK target shutdown done 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:53.491 09:27:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:53.491 Success 00:04:53.491 09:27:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:53.491 ************************************ 00:04:53.491 END TEST json_config_extra_key 00:04:53.491 ************************************ 00:04:53.491 00:04:53.491 real 0m1.786s 00:04:53.491 user 0m1.661s 00:04:53.491 sys 0m0.318s 00:04:53.491 09:27:39 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:53.491 09:27:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:53.491 09:27:39 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.491 09:27:39 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:53.491 09:27:39 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:53.491 09:27:39 -- common/autotest_common.sh@10 -- # set +x 00:04:53.491 ************************************ 00:04:53.491 START TEST alias_rpc 00:04:53.491 ************************************ 00:04:53.491 09:27:39 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:53.750 * Looking for test storage... 
00:04:53.750 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.750 09:27:39 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.750 --rc genhtml_branch_coverage=1 00:04:53.750 --rc genhtml_function_coverage=1 00:04:53.750 --rc genhtml_legend=1 00:04:53.750 --rc geninfo_all_blocks=1 00:04:53.750 --rc geninfo_unexecuted_blocks=1 00:04:53.750 00:04:53.750 ' 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.750 --rc genhtml_branch_coverage=1 00:04:53.750 --rc genhtml_function_coverage=1 00:04:53.750 --rc genhtml_legend=1 00:04:53.750 --rc geninfo_all_blocks=1 00:04:53.750 --rc geninfo_unexecuted_blocks=1 00:04:53.750 00:04:53.750 ' 00:04:53.750 09:27:39 alias_rpc -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.750 --rc genhtml_branch_coverage=1 00:04:53.750 --rc genhtml_function_coverage=1 00:04:53.750 --rc genhtml_legend=1 00:04:53.750 --rc geninfo_all_blocks=1 00:04:53.750 --rc geninfo_unexecuted_blocks=1 00:04:53.750 00:04:53.750 ' 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:53.750 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.750 --rc genhtml_branch_coverage=1 00:04:53.750 --rc genhtml_function_coverage=1 00:04:53.750 --rc genhtml_legend=1 00:04:53.750 --rc geninfo_all_blocks=1 00:04:53.750 --rc geninfo_unexecuted_blocks=1 00:04:53.750 00:04:53.750 ' 00:04:53.750 09:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:53.750 09:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57740 00:04:53.750 09:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57740 00:04:53.750 09:27:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57740 ']' 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:53.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:53.750 09:27:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:53.750 [2024-11-05 09:27:39.692301] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:04:53.750 [2024-11-05 09:27:39.692552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57740 ] 00:04:54.009 [2024-11-05 09:27:39.833455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.009 [2024-11-05 09:27:39.862889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.009 [2024-11-05 09:27:39.899928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:54.946 09:27:40 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:54.946 09:27:40 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:04:54.946 09:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:55.205 09:27:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57740 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57740 ']' 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57740 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57740 00:04:55.205 killing process with pid 57740 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57740' 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@971 -- # kill 57740 00:04:55.205 09:27:40 alias_rpc -- common/autotest_common.sh@976 -- # wait 57740 00:04:55.465 00:04:55.465 real 0m1.741s 00:04:55.465 user 0m2.084s 00:04:55.465 sys 0m0.326s 00:04:55.465 09:27:41 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.465 09:27:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.465 ************************************ 00:04:55.465 END TEST alias_rpc 00:04:55.465 ************************************ 00:04:55.465 09:27:41 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:55.465 09:27:41 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:55.465 09:27:41 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.465 09:27:41 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.465 09:27:41 -- common/autotest_common.sh@10 -- # set +x 00:04:55.465 ************************************ 00:04:55.465 START TEST spdkcli_tcp 00:04:55.465 ************************************ 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:55.465 * Looking for test storage... 
00:04:55.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.465 09:27:41 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.465 --rc genhtml_branch_coverage=1 00:04:55.465 --rc genhtml_function_coverage=1 00:04:55.465 --rc genhtml_legend=1 00:04:55.465 --rc geninfo_all_blocks=1 00:04:55.465 --rc geninfo_unexecuted_blocks=1 00:04:55.465 00:04:55.465 ' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.465 --rc genhtml_branch_coverage=1 00:04:55.465 --rc genhtml_function_coverage=1 00:04:55.465 --rc genhtml_legend=1 00:04:55.465 --rc geninfo_all_blocks=1 00:04:55.465 --rc geninfo_unexecuted_blocks=1 00:04:55.465 
00:04:55.465 ' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.465 --rc genhtml_branch_coverage=1 00:04:55.465 --rc genhtml_function_coverage=1 00:04:55.465 --rc genhtml_legend=1 00:04:55.465 --rc geninfo_all_blocks=1 00:04:55.465 --rc geninfo_unexecuted_blocks=1 00:04:55.465 00:04:55.465 ' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.465 --rc genhtml_branch_coverage=1 00:04:55.465 --rc genhtml_function_coverage=1 00:04:55.465 --rc genhtml_legend=1 00:04:55.465 --rc geninfo_all_blocks=1 00:04:55.465 --rc geninfo_unexecuted_blocks=1 00:04:55.465 00:04:55.465 ' 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57819 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:55.465 09:27:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57819 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57819 ']' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.465 09:27:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:55.725 [2024-11-05 09:27:41.477024] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:04:55.725 [2024-11-05 09:27:41.477127] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57819 ] 00:04:55.725 [2024-11-05 09:27:41.618239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:55.725 [2024-11-05 09:27:41.650924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:55.725 [2024-11-05 09:27:41.650935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.984 [2024-11-05 09:27:41.692817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:56.552 09:27:42 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.552 09:27:42 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:04:56.552 09:27:42 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57836 00:04:56.552 09:27:42 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:56.552 09:27:42 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:56.813 [ 00:04:56.813 "bdev_malloc_delete", 00:04:56.813 "bdev_malloc_create", 00:04:56.813 "bdev_null_resize", 00:04:56.813 "bdev_null_delete", 00:04:56.813 "bdev_null_create", 00:04:56.813 "bdev_nvme_cuse_unregister", 00:04:56.813 "bdev_nvme_cuse_register", 00:04:56.813 "bdev_opal_new_user", 00:04:56.813 "bdev_opal_set_lock_state", 00:04:56.813 "bdev_opal_delete", 00:04:56.813 "bdev_opal_get_info", 00:04:56.813 "bdev_opal_create", 00:04:56.813 "bdev_nvme_opal_revert", 00:04:56.813 "bdev_nvme_opal_init", 00:04:56.813 "bdev_nvme_send_cmd", 00:04:56.813 "bdev_nvme_set_keys", 00:04:56.813 "bdev_nvme_get_path_iostat", 00:04:56.813 "bdev_nvme_get_mdns_discovery_info", 00:04:56.813 "bdev_nvme_stop_mdns_discovery", 00:04:56.813 "bdev_nvme_start_mdns_discovery", 00:04:56.813 "bdev_nvme_set_multipath_policy", 00:04:56.813 "bdev_nvme_set_preferred_path", 00:04:56.813 "bdev_nvme_get_io_paths", 00:04:56.813 "bdev_nvme_remove_error_injection", 00:04:56.813 "bdev_nvme_add_error_injection", 00:04:56.813 "bdev_nvme_get_discovery_info", 00:04:56.813 "bdev_nvme_stop_discovery", 00:04:56.813 "bdev_nvme_start_discovery", 00:04:56.813 "bdev_nvme_get_controller_health_info", 00:04:56.813 "bdev_nvme_disable_controller", 00:04:56.813 "bdev_nvme_enable_controller", 00:04:56.813 "bdev_nvme_reset_controller", 00:04:56.813 "bdev_nvme_get_transport_statistics", 00:04:56.813 "bdev_nvme_apply_firmware", 00:04:56.813 "bdev_nvme_detach_controller", 00:04:56.813 "bdev_nvme_get_controllers", 00:04:56.813 "bdev_nvme_attach_controller", 00:04:56.813 "bdev_nvme_set_hotplug", 00:04:56.813 "bdev_nvme_set_options", 00:04:56.813 "bdev_passthru_delete", 00:04:56.813 "bdev_passthru_create", 00:04:56.813 "bdev_lvol_set_parent_bdev", 00:04:56.813 "bdev_lvol_set_parent", 00:04:56.813 "bdev_lvol_check_shallow_copy", 00:04:56.813 "bdev_lvol_start_shallow_copy", 00:04:56.813 "bdev_lvol_grow_lvstore", 00:04:56.813 "bdev_lvol_get_lvols", 00:04:56.813 "bdev_lvol_get_lvstores", 00:04:56.813 "bdev_lvol_delete", 00:04:56.813 "bdev_lvol_set_read_only", 00:04:56.813 "bdev_lvol_resize", 00:04:56.813 "bdev_lvol_decouple_parent", 00:04:56.813 "bdev_lvol_inflate", 00:04:56.813 "bdev_lvol_rename", 00:04:56.813 "bdev_lvol_clone_bdev", 00:04:56.813 "bdev_lvol_clone", 00:04:56.813 "bdev_lvol_snapshot", 
00:04:56.813 "bdev_lvol_create", 00:04:56.813 "bdev_lvol_delete_lvstore", 00:04:56.813 "bdev_lvol_rename_lvstore", 00:04:56.813 "bdev_lvol_create_lvstore", 00:04:56.813 "bdev_raid_set_options", 00:04:56.813 "bdev_raid_remove_base_bdev", 00:04:56.813 "bdev_raid_add_base_bdev", 00:04:56.813 "bdev_raid_delete", 00:04:56.813 "bdev_raid_create", 00:04:56.813 "bdev_raid_get_bdevs", 00:04:56.813 "bdev_error_inject_error", 00:04:56.813 "bdev_error_delete", 00:04:56.813 "bdev_error_create", 00:04:56.813 "bdev_split_delete", 00:04:56.813 "bdev_split_create", 00:04:56.813 "bdev_delay_delete", 00:04:56.813 "bdev_delay_create", 00:04:56.813 "bdev_delay_update_latency", 00:04:56.813 "bdev_zone_block_delete", 00:04:56.813 "bdev_zone_block_create", 00:04:56.813 "blobfs_create", 00:04:56.813 "blobfs_detect", 00:04:56.813 "blobfs_set_cache_size", 00:04:56.813 "bdev_aio_delete", 00:04:56.813 "bdev_aio_rescan", 00:04:56.813 "bdev_aio_create", 00:04:56.813 "bdev_ftl_set_property", 00:04:56.813 "bdev_ftl_get_properties", 00:04:56.813 "bdev_ftl_get_stats", 00:04:56.813 "bdev_ftl_unmap", 00:04:56.813 "bdev_ftl_unload", 00:04:56.813 "bdev_ftl_delete", 00:04:56.813 "bdev_ftl_load", 00:04:56.813 "bdev_ftl_create", 00:04:56.813 "bdev_virtio_attach_controller", 00:04:56.813 "bdev_virtio_scsi_get_devices", 00:04:56.813 "bdev_virtio_detach_controller", 00:04:56.813 "bdev_virtio_blk_set_hotplug", 00:04:56.813 "bdev_iscsi_delete", 00:04:56.813 "bdev_iscsi_create", 00:04:56.813 "bdev_iscsi_set_options", 00:04:56.813 "bdev_uring_delete", 00:04:56.813 "bdev_uring_rescan", 00:04:56.813 "bdev_uring_create", 00:04:56.813 "accel_error_inject_error", 00:04:56.813 "ioat_scan_accel_module", 00:04:56.813 "dsa_scan_accel_module", 00:04:56.813 "iaa_scan_accel_module", 00:04:56.813 "keyring_file_remove_key", 00:04:56.813 "keyring_file_add_key", 00:04:56.813 "keyring_linux_set_options", 00:04:56.813 "fsdev_aio_delete", 00:04:56.813 "fsdev_aio_create", 00:04:56.813 "iscsi_get_histogram", 00:04:56.813 "iscsi_enable_histogram", 00:04:56.813 "iscsi_set_options", 00:04:56.813 "iscsi_get_auth_groups", 00:04:56.813 "iscsi_auth_group_remove_secret", 00:04:56.813 "iscsi_auth_group_add_secret", 00:04:56.813 "iscsi_delete_auth_group", 00:04:56.813 "iscsi_create_auth_group", 00:04:56.813 "iscsi_set_discovery_auth", 00:04:56.813 "iscsi_get_options", 00:04:56.813 "iscsi_target_node_request_logout", 00:04:56.813 "iscsi_target_node_set_redirect", 00:04:56.813 "iscsi_target_node_set_auth", 00:04:56.813 "iscsi_target_node_add_lun", 00:04:56.813 "iscsi_get_stats", 00:04:56.813 "iscsi_get_connections", 00:04:56.813 "iscsi_portal_group_set_auth", 00:04:56.813 "iscsi_start_portal_group", 00:04:56.813 "iscsi_delete_portal_group", 00:04:56.813 "iscsi_create_portal_group", 00:04:56.813 "iscsi_get_portal_groups", 00:04:56.813 "iscsi_delete_target_node", 00:04:56.813 "iscsi_target_node_remove_pg_ig_maps", 00:04:56.813 "iscsi_target_node_add_pg_ig_maps", 00:04:56.813 "iscsi_create_target_node", 00:04:56.813 "iscsi_get_target_nodes", 00:04:56.813 "iscsi_delete_initiator_group", 00:04:56.813 "iscsi_initiator_group_remove_initiators", 00:04:56.813 "iscsi_initiator_group_add_initiators", 00:04:56.813 "iscsi_create_initiator_group", 00:04:56.813 "iscsi_get_initiator_groups", 00:04:56.813 "nvmf_set_crdt", 00:04:56.813 "nvmf_set_config", 00:04:56.813 "nvmf_set_max_subsystems", 00:04:56.813 "nvmf_stop_mdns_prr", 00:04:56.813 "nvmf_publish_mdns_prr", 00:04:56.813 "nvmf_subsystem_get_listeners", 00:04:56.813 "nvmf_subsystem_get_qpairs", 00:04:56.813 
"nvmf_subsystem_get_controllers", 00:04:56.813 "nvmf_get_stats", 00:04:56.813 "nvmf_get_transports", 00:04:56.813 "nvmf_create_transport", 00:04:56.813 "nvmf_get_targets", 00:04:56.813 "nvmf_delete_target", 00:04:56.814 "nvmf_create_target", 00:04:56.814 "nvmf_subsystem_allow_any_host", 00:04:56.814 "nvmf_subsystem_set_keys", 00:04:56.814 "nvmf_subsystem_remove_host", 00:04:56.814 "nvmf_subsystem_add_host", 00:04:56.814 "nvmf_ns_remove_host", 00:04:56.814 "nvmf_ns_add_host", 00:04:56.814 "nvmf_subsystem_remove_ns", 00:04:56.814 "nvmf_subsystem_set_ns_ana_group", 00:04:56.814 "nvmf_subsystem_add_ns", 00:04:56.814 "nvmf_subsystem_listener_set_ana_state", 00:04:56.814 "nvmf_discovery_get_referrals", 00:04:56.814 "nvmf_discovery_remove_referral", 00:04:56.814 "nvmf_discovery_add_referral", 00:04:56.814 "nvmf_subsystem_remove_listener", 00:04:56.814 "nvmf_subsystem_add_listener", 00:04:56.814 "nvmf_delete_subsystem", 00:04:56.814 "nvmf_create_subsystem", 00:04:56.814 "nvmf_get_subsystems", 00:04:56.814 "env_dpdk_get_mem_stats", 00:04:56.814 "nbd_get_disks", 00:04:56.814 "nbd_stop_disk", 00:04:56.814 "nbd_start_disk", 00:04:56.814 "ublk_recover_disk", 00:04:56.814 "ublk_get_disks", 00:04:56.814 "ublk_stop_disk", 00:04:56.814 "ublk_start_disk", 00:04:56.814 "ublk_destroy_target", 00:04:56.814 "ublk_create_target", 00:04:56.814 "virtio_blk_create_transport", 00:04:56.814 "virtio_blk_get_transports", 00:04:56.814 "vhost_controller_set_coalescing", 00:04:56.814 "vhost_get_controllers", 00:04:56.814 "vhost_delete_controller", 00:04:56.814 "vhost_create_blk_controller", 00:04:56.814 "vhost_scsi_controller_remove_target", 00:04:56.814 "vhost_scsi_controller_add_target", 00:04:56.814 "vhost_start_scsi_controller", 00:04:56.814 "vhost_create_scsi_controller", 00:04:56.814 "thread_set_cpumask", 00:04:56.814 "scheduler_set_options", 00:04:56.814 "framework_get_governor", 00:04:56.814 "framework_get_scheduler", 00:04:56.814 "framework_set_scheduler", 00:04:56.814 "framework_get_reactors", 00:04:56.814 "thread_get_io_channels", 00:04:56.814 "thread_get_pollers", 00:04:56.814 "thread_get_stats", 00:04:56.814 "framework_monitor_context_switch", 00:04:56.814 "spdk_kill_instance", 00:04:56.814 "log_enable_timestamps", 00:04:56.814 "log_get_flags", 00:04:56.814 "log_clear_flag", 00:04:56.814 "log_set_flag", 00:04:56.814 "log_get_level", 00:04:56.814 "log_set_level", 00:04:56.814 "log_get_print_level", 00:04:56.814 "log_set_print_level", 00:04:56.814 "framework_enable_cpumask_locks", 00:04:56.814 "framework_disable_cpumask_locks", 00:04:56.814 "framework_wait_init", 00:04:56.814 "framework_start_init", 00:04:56.814 "scsi_get_devices", 00:04:56.814 "bdev_get_histogram", 00:04:56.814 "bdev_enable_histogram", 00:04:56.814 "bdev_set_qos_limit", 00:04:56.814 "bdev_set_qd_sampling_period", 00:04:56.814 "bdev_get_bdevs", 00:04:56.814 "bdev_reset_iostat", 00:04:56.814 "bdev_get_iostat", 00:04:56.814 "bdev_examine", 00:04:56.814 "bdev_wait_for_examine", 00:04:56.814 "bdev_set_options", 00:04:56.814 "accel_get_stats", 00:04:56.814 "accel_set_options", 00:04:56.814 "accel_set_driver", 00:04:56.814 "accel_crypto_key_destroy", 00:04:56.814 "accel_crypto_keys_get", 00:04:56.814 "accel_crypto_key_create", 00:04:56.814 "accel_assign_opc", 00:04:56.814 "accel_get_module_info", 00:04:56.814 "accel_get_opc_assignments", 00:04:56.814 "vmd_rescan", 00:04:56.814 "vmd_remove_device", 00:04:56.814 "vmd_enable", 00:04:56.814 "sock_get_default_impl", 00:04:56.814 "sock_set_default_impl", 00:04:56.814 "sock_impl_set_options", 00:04:56.814 
"sock_impl_get_options", 00:04:56.814 "iobuf_get_stats", 00:04:56.814 "iobuf_set_options", 00:04:56.814 "keyring_get_keys", 00:04:56.814 "framework_get_pci_devices", 00:04:56.814 "framework_get_config", 00:04:56.814 "framework_get_subsystems", 00:04:56.814 "fsdev_set_opts", 00:04:56.814 "fsdev_get_opts", 00:04:56.814 "trace_get_info", 00:04:56.814 "trace_get_tpoint_group_mask", 00:04:56.814 "trace_disable_tpoint_group", 00:04:56.814 "trace_enable_tpoint_group", 00:04:56.814 "trace_clear_tpoint_mask", 00:04:56.814 "trace_set_tpoint_mask", 00:04:56.814 "notify_get_notifications", 00:04:56.814 "notify_get_types", 00:04:56.814 "spdk_get_version", 00:04:56.814 "rpc_get_methods" 00:04:56.814 ] 00:04:56.814 09:27:42 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:56.814 09:27:42 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.814 09:27:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:56.814 09:27:42 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:56.814 09:27:42 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57819 00:04:56.814 09:27:42 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57819 ']' 00:04:56.814 09:27:42 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57819 00:04:56.814 09:27:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:04:56.814 09:27:42 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.073 09:27:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57819 00:04:57.073 09:27:42 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.073 09:27:42 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.073 killing process with pid 57819 00:04:57.073 09:27:42 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57819' 00:04:57.073 09:27:42 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57819 00:04:57.073 09:27:42 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57819 00:04:57.073 00:04:57.073 real 0m1.790s 00:04:57.073 user 0m3.463s 00:04:57.073 sys 0m0.395s 00:04:57.073 09:27:43 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.073 09:27:43 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:57.073 ************************************ 00:04:57.073 END TEST spdkcli_tcp 00:04:57.073 ************************************ 00:04:57.333 09:27:43 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.333 09:27:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.333 09:27:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.333 09:27:43 -- common/autotest_common.sh@10 -- # set +x 00:04:57.333 ************************************ 00:04:57.333 START TEST dpdk_mem_utility 00:04:57.333 ************************************ 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:57.333 * Looking for test storage... 
00:04:57.333 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.333 09:27:43 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.333 --rc genhtml_branch_coverage=1 00:04:57.333 --rc genhtml_function_coverage=1 00:04:57.333 --rc genhtml_legend=1 00:04:57.333 --rc geninfo_all_blocks=1 00:04:57.333 --rc geninfo_unexecuted_blocks=1 00:04:57.333 00:04:57.333 ' 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.333 --rc 
genhtml_branch_coverage=1 00:04:57.333 --rc genhtml_function_coverage=1 00:04:57.333 --rc genhtml_legend=1 00:04:57.333 --rc geninfo_all_blocks=1 00:04:57.333 --rc geninfo_unexecuted_blocks=1 00:04:57.333 00:04:57.333 ' 00:04:57.333 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:57.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.333 --rc genhtml_branch_coverage=1 00:04:57.334 --rc genhtml_function_coverage=1 00:04:57.334 --rc genhtml_legend=1 00:04:57.334 --rc geninfo_all_blocks=1 00:04:57.334 --rc geninfo_unexecuted_blocks=1 00:04:57.334 00:04:57.334 ' 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:57.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.334 --rc genhtml_branch_coverage=1 00:04:57.334 --rc genhtml_function_coverage=1 00:04:57.334 --rc genhtml_legend=1 00:04:57.334 --rc geninfo_all_blocks=1 00:04:57.334 --rc geninfo_unexecuted_blocks=1 00:04:57.334 00:04:57.334 ' 00:04:57.334 09:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:57.334 09:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57918 00:04:57.334 09:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57918 00:04:57.334 09:27:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 57918 ']' 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:57.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:57.334 09:27:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:57.334 [2024-11-05 09:27:43.276865] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:04:57.334 [2024-11-05 09:27:43.276987] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57918 ] 00:04:57.593 [2024-11-05 09:27:43.418222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.593 [2024-11-05 09:27:43.451585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.593 [2024-11-05 09:27:43.488999] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:04:58.531 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:58.531 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:04:58.531 09:27:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:58.531 09:27:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:58.531 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.531 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.531 { 00:04:58.531 "filename": "/tmp/spdk_mem_dump.txt" 00:04:58.531 } 00:04:58.531 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.531 09:27:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:58.531 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:58.531 1 heaps totaling size 810.000000 MiB 00:04:58.531 size: 810.000000 MiB heap id: 0 00:04:58.531 end heaps---------- 00:04:58.531 9 mempools totaling size 595.772034 MiB 00:04:58.531 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:58.531 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:58.531 size: 92.545471 MiB name: bdev_io_57918 00:04:58.531 size: 50.003479 MiB name: msgpool_57918 00:04:58.531 size: 36.509338 MiB name: fsdev_io_57918 00:04:58.531 size: 21.763794 MiB name: PDU_Pool 00:04:58.531 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:58.531 size: 4.133484 MiB name: evtpool_57918 00:04:58.531 size: 0.026123 MiB name: Session_Pool 00:04:58.531 end mempools------- 00:04:58.531 6 memzones totaling size 4.142822 MiB 00:04:58.531 size: 1.000366 MiB name: RG_ring_0_57918 00:04:58.531 size: 1.000366 MiB name: RG_ring_1_57918 00:04:58.531 size: 1.000366 MiB name: RG_ring_4_57918 00:04:58.531 size: 1.000366 MiB name: RG_ring_5_57918 00:04:58.531 size: 0.125366 MiB name: RG_ring_2_57918 00:04:58.532 size: 0.015991 MiB name: RG_ring_3_57918 00:04:58.532 end memzones------- 00:04:58.532 09:27:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:58.532 heap id: 0 total size: 810.000000 MiB number of busy elements: 315 number of free elements: 15 00:04:58.532 list of free elements. 
size: 10.812866 MiB 00:04:58.532 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:58.532 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:58.532 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:58.532 element at address: 0x200000400000 with size: 0.993958 MiB 00:04:58.532 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:58.532 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:58.532 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:58.532 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:58.532 element at address: 0x20001a600000 with size: 0.567322 MiB 00:04:58.532 element at address: 0x20000a600000 with size: 0.488892 MiB 00:04:58.532 element at address: 0x200000c00000 with size: 0.487000 MiB 00:04:58.532 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:58.532 element at address: 0x200003e00000 with size: 0.480286 MiB 00:04:58.532 element at address: 0x200027a00000 with size: 0.395752 MiB 00:04:58.532 element at address: 0x200000800000 with size: 0.351746 MiB 00:04:58.532 list of standard malloc elements. size: 199.268250 MiB 00:04:58.532 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:58.532 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:58.532 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:58.532 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:58.532 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:58.532 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:58.532 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:58.532 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:58.532 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:58.532 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fe740 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fe800 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fe8c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fe980 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fea40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004feb00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004febc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fec80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fed40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fee00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004feec0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004fef80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff040 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff100 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff1c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff280 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff340 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff400 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff4c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff580 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff640 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff700 with size: 0.000183 MiB 
00:04:58.532 element at address: 0x2000004ff7c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff880 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ff940 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ffcc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000085a0c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000085a2c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000085e580 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087e840 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087e900 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087e9c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087ea80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087eb40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087ec00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087ecc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087ed80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087ee40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087ef00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087efc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f080 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f140 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f200 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f2c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f380 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f440 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f500 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7cac0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7cb80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7cc40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7cd00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7cdc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7ce80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7cf40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d000 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d0c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d180 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d240 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d300 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d3c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d480 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d540 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d600 with size: 0.000183 MiB 00:04:58.532 element at 
address: 0x200000c7d6c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d780 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d840 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d900 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7d9c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7da80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7db40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7dc00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7dcc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7dd80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7de40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7df00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7dfc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e080 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e140 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e200 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e2c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e380 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e440 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e500 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e5c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e680 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e740 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e800 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e8c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7e980 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7ea40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7eb00 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7ebc0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7ec80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7af40 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b000 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b0c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b180 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b240 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b300 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b3c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b480 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b540 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:58.532 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d280 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d340 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d400 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d4c0 
with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d580 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d640 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d700 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d7c0 with size: 0.000183 MiB 00:04:58.532 element at address: 0x20000a67d880 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20000a67d940 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6913c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691480 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691540 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691600 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6916c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691780 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691840 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691900 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6919c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691a80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691b40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691c00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691cc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691d80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691e40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691f00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a691fc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692080 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692140 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692200 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6922c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692380 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692440 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692500 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6925c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692680 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692740 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692800 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6928c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692980 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692a40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692b00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692bc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692c80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692d40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692e00 with size: 0.000183 MiB 
00:04:58.533 element at address: 0x20001a692ec0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a692f80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693040 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693100 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6931c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693280 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693340 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693400 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6934c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693580 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693640 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693700 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6937c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693880 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693940 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693a00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693ac0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693b80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693c40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693d00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693dc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693e80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a693f40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694000 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6940c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694180 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694240 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694300 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6943c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694480 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694540 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694600 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6946c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694780 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694840 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694900 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6949c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694a80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694b40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694c00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694cc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694d80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694e40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694f00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a694fc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a695080 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a695140 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a695200 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a6952c0 with size: 0.000183 MiB 00:04:58.533 element at 
address: 0x20001a695380 with size: 0.000183 MiB 00:04:58.533 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a65500 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a655c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c1c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c3c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c480 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c540 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c600 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c6c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c780 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c840 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c900 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6c9c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6ca80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6cb40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6cc00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6ccc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6cd80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6ce40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6cf00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6cfc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d080 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d140 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d200 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d2c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d380 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d440 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d500 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d5c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d680 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d740 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d800 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d8c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6d980 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6da40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6db00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6dbc0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6dc80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6dd40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6de00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6dec0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6df80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e040 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e100 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e1c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e280 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e340 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e400 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e4c0 
with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e580 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e640 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e700 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e7c0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e880 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6e940 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6ea00 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6eac0 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6eb80 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6ec40 with size: 0.000183 MiB 00:04:58.533 element at address: 0x200027a6ed00 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6edc0 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6ee80 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6ef40 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f000 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f0c0 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f180 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f240 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f300 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f3c0 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f480 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f540 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f600 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f6c0 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f780 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f840 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f900 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6f9c0 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6fa80 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6fb40 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6fc00 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6fcc0 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6fd80 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:58.534 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:58.534 list of memzone associated elements. 
size: 599.918884 MiB 00:04:58.534 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:58.534 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:58.534 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:58.534 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:58.534 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:58.534 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_57918_0 00:04:58.534 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:58.534 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57918_0 00:04:58.534 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:58.534 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57918_0 00:04:58.534 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:58.534 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:58.534 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:58.534 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:58.534 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:58.534 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57918_0 00:04:58.534 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:58.534 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57918 00:04:58.534 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:58.534 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57918 00:04:58.534 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:58.534 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:58.534 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:58.534 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:58.534 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:58.534 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:58.534 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:58.534 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:58.534 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:58.534 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57918 00:04:58.534 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:58.534 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57918 00:04:58.534 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:58.534 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57918 00:04:58.534 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:58.534 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57918 00:04:58.534 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:58.534 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57918 00:04:58.534 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:58.534 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57918 00:04:58.534 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:58.534 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:58.534 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:58.534 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:58.534 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:58.534 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:58.534 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:58.534 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57918 00:04:58.534 element at address: 0x20000085e640 with size: 0.125488 MiB 00:04:58.534 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57918 00:04:58.534 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:04:58.534 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:58.534 element at address: 0x200027a65680 with size: 0.023743 MiB 00:04:58.534 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:58.534 element at address: 0x20000085a380 with size: 0.016113 MiB 00:04:58.534 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57918 00:04:58.534 element at address: 0x200027a6b7c0 with size: 0.002441 MiB 00:04:58.534 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:58.534 element at address: 0x2000004ffb80 with size: 0.000305 MiB 00:04:58.534 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57918 00:04:58.534 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:58.534 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57918 00:04:58.534 element at address: 0x20000085a180 with size: 0.000305 MiB 00:04:58.534 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57918 00:04:58.534 element at address: 0x200027a6c280 with size: 0.000305 MiB 00:04:58.534 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:58.534 09:27:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:58.534 09:27:44 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57918 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 57918 ']' 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 57918 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57918 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:58.534 killing process with pid 57918 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57918' 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 57918 00:04:58.534 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 57918 00:04:58.794 00:04:58.794 real 0m1.561s 00:04:58.794 user 0m1.799s 00:04:58.794 sys 0m0.307s 00:04:58.794 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.794 09:27:44 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:58.794 ************************************ 00:04:58.794 END TEST dpdk_mem_utility 00:04:58.794 ************************************ 00:04:58.794 09:27:44 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.794 09:27:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.794 09:27:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.794 09:27:44 -- common/autotest_common.sh@10 -- # set +x 
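The dpdk_mem_utility flow above can be reproduced by hand against a running spdk_tgt. A minimal sketch, assuming the SPDK repo root as the working directory and a target already listening on the default /var/tmp/spdk.sock (rpc_cmd in the test harness is a thin wrapper around scripts/rpc.py):
./scripts/rpc.py env_dpdk_get_mem_stats   # dumps stats to /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                # summarize heaps, mempools and memzones
./scripts/dpdk_mem_info.py -m 0           # per-element detail for heap id 0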
00:04:58.794 ************************************ 00:04:58.794 START TEST event 00:04:58.794 ************************************ 00:04:58.794 09:27:44 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:58.794 * Looking for test storage... 00:04:58.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.053 09:27:44 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.053 09:27:44 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.053 09:27:44 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.053 09:27:44 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.053 09:27:44 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.053 09:27:44 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.053 09:27:44 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.053 09:27:44 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.053 09:27:44 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.053 09:27:44 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.053 09:27:44 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.053 09:27:44 event -- scripts/common.sh@344 -- # case "$op" in 00:04:59.053 09:27:44 event -- scripts/common.sh@345 -- # : 1 00:04:59.053 09:27:44 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.053 09:27:44 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:59.053 09:27:44 event -- scripts/common.sh@365 -- # decimal 1 00:04:59.053 09:27:44 event -- scripts/common.sh@353 -- # local d=1 00:04:59.053 09:27:44 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.053 09:27:44 event -- scripts/common.sh@355 -- # echo 1 00:04:59.053 09:27:44 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.053 09:27:44 event -- scripts/common.sh@366 -- # decimal 2 00:04:59.053 09:27:44 event -- scripts/common.sh@353 -- # local d=2 00:04:59.053 09:27:44 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.053 09:27:44 event -- scripts/common.sh@355 -- # echo 2 00:04:59.053 09:27:44 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.053 09:27:44 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.053 09:27:44 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.053 09:27:44 event -- scripts/common.sh@368 -- # return 0 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.053 --rc genhtml_branch_coverage=1 00:04:59.053 --rc genhtml_function_coverage=1 00:04:59.053 --rc genhtml_legend=1 00:04:59.053 --rc geninfo_all_blocks=1 00:04:59.053 --rc geninfo_unexecuted_blocks=1 00:04:59.053 00:04:59.053 ' 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.053 --rc genhtml_branch_coverage=1 00:04:59.053 --rc genhtml_function_coverage=1 00:04:59.053 --rc genhtml_legend=1 00:04:59.053 --rc 
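The EAL parameters line above shows event_perf launched with core mask 0xF, propagated from the test's -m 0xF argument. As an illustrative aside (not part of the test), the mask expands to lcores by simple bit testing:
# hypothetical helper: list the lcores enabled by a core mask
mask=0xF
for i in $(seq 0 7); do
    (( (mask >> i) & 1 )) && echo "lcore $i enabled"
done
which matches the four cores reported next.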
geninfo_all_blocks=1 00:04:59.053 --rc geninfo_unexecuted_blocks=1 00:04:59.053 00:04:59.053 ' 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.053 --rc genhtml_branch_coverage=1 00:04:59.053 --rc genhtml_function_coverage=1 00:04:59.053 --rc genhtml_legend=1 00:04:59.053 --rc geninfo_all_blocks=1 00:04:59.053 --rc geninfo_unexecuted_blocks=1 00:04:59.053 00:04:59.053 ' 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.053 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.053 --rc genhtml_branch_coverage=1 00:04:59.053 --rc genhtml_function_coverage=1 00:04:59.053 --rc genhtml_legend=1 00:04:59.053 --rc geninfo_all_blocks=1 00:04:59.053 --rc geninfo_unexecuted_blocks=1 00:04:59.053 00:04:59.053 ' 00:04:59.053 09:27:44 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:59.053 09:27:44 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:59.053 09:27:44 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:04:59.053 09:27:44 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.053 09:27:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:59.053 ************************************ 00:04:59.053 START TEST event_perf 00:04:59.053 ************************************ 00:04:59.053 09:27:44 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:59.053 Running I/O for 1 seconds...[2024-11-05 09:27:44.887412] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:04:59.053 [2024-11-05 09:27:44.887606] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58003 ] 00:04:59.312 [2024-11-05 09:27:45.028296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:59.312 [2024-11-05 09:27:45.057907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:59.312 [2024-11-05 09:27:45.058018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:59.312 [2024-11-05 09:27:45.058154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:59.312 [2024-11-05 09:27:45.058157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.247 Running I/O for 1 seconds... 00:05:00.247 lcore 0: 205537 00:05:00.247 lcore 1: 205538 00:05:00.247 lcore 2: 205538 00:05:00.247 lcore 3: 205538 00:05:00.247 done. 
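Summing the per-lcore counters above gives the aggregate throughput of the 1-second run; a quick illustrative check of the arithmetic:
printf '205537\n205538\n205538\n205538\n' | awk '{s+=$1} END {print s " events in 1 s"}'
# prints: 822151 events in 1 s
That is roughly 822 k events/s, spread almost perfectly evenly across the four reactors.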
00:05:00.247 00:05:00.247 real 0m1.225s 00:05:00.247 user 0m4.064s 00:05:00.247 sys 0m0.040s 00:05:00.247 09:27:46 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:00.247 09:27:46 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:00.247 ************************************ 00:05:00.247 END TEST event_perf 00:05:00.247 ************************************ 00:05:00.247 09:27:46 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:00.247 09:27:46 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:00.247 09:27:46 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:00.247 09:27:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:00.247 ************************************ 00:05:00.247 START TEST event_reactor 00:05:00.247 ************************************ 00:05:00.247 09:27:46 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:00.247 [2024-11-05 09:27:46.165817] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:00.247 [2024-11-05 09:27:46.165936] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58036 ] 00:05:00.506 [2024-11-05 09:27:46.309628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.506 [2024-11-05 09:27:46.339323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.443 test_start 00:05:01.443 oneshot 00:05:01.443 tick 100 00:05:01.443 tick 100 00:05:01.443 tick 250 00:05:01.443 tick 100 00:05:01.443 tick 100 00:05:01.443 tick 250 00:05:01.443 tick 500 00:05:01.443 tick 100 00:05:01.443 tick 100 00:05:01.443 tick 100 00:05:01.443 tick 250 00:05:01.443 tick 100 00:05:01.443 tick 100 00:05:01.443 test_end 00:05:01.443 00:05:01.443 real 0m1.226s 00:05:01.443 user 0m1.089s 00:05:01.443 sys 0m0.033s 00:05:01.443 09:27:47 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:01.443 ************************************ 00:05:01.443 END TEST event_reactor 00:05:01.443 ************************************ 00:05:01.443 09:27:47 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:01.703 09:27:47 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.703 09:27:47 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:01.703 09:27:47 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:01.703 09:27:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.703 ************************************ 00:05:01.703 START TEST event_reactor_perf 00:05:01.703 ************************************ 00:05:01.703 09:27:47 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:01.703 [2024-11-05 09:27:47.445104] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:01.703 [2024-11-05 09:27:47.445187] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:05:01.703 [2024-11-05 09:27:47.579899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.703 [2024-11-05 09:27:47.606829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.080 test_start 00:05:03.080 test_end 00:05:03.080 Performance: 449038 events per second 00:05:03.080 ************************************ 00:05:03.080 END TEST event_reactor_perf 00:05:03.080 ************************************ 00:05:03.080 00:05:03.080 real 0m1.222s 00:05:03.080 user 0m1.082s 00:05:03.080 sys 0m0.034s 00:05:03.080 09:27:48 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.080 09:27:48 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:03.080 09:27:48 event -- event/event.sh@49 -- # uname -s 00:05:03.080 09:27:48 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:03.080 09:27:48 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.080 09:27:48 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.080 09:27:48 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.080 09:27:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:03.080 ************************************ 00:05:03.080 START TEST event_scheduler 00:05:03.080 ************************************ 00:05:03.080 09:27:48 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:03.080 * Looking for test storage... 
00:05:03.080 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:03.080 09:27:48 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:03.080 09:27:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:03.080 09:27:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:03.080 09:27:48 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.080 09:27:48 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.081 09:27:48 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:03.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.081 --rc genhtml_branch_coverage=1 00:05:03.081 --rc genhtml_function_coverage=1 00:05:03.081 --rc genhtml_legend=1 00:05:03.081 --rc geninfo_all_blocks=1 00:05:03.081 --rc geninfo_unexecuted_blocks=1 00:05:03.081 00:05:03.081 ' 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:03.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.081 --rc genhtml_branch_coverage=1 00:05:03.081 --rc genhtml_function_coverage=1 00:05:03.081 --rc genhtml_legend=1 00:05:03.081 --rc geninfo_all_blocks=1 00:05:03.081 --rc geninfo_unexecuted_blocks=1 00:05:03.081 00:05:03.081 ' 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:03.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.081 --rc genhtml_branch_coverage=1 00:05:03.081 --rc genhtml_function_coverage=1 00:05:03.081 --rc genhtml_legend=1 00:05:03.081 --rc geninfo_all_blocks=1 00:05:03.081 --rc geninfo_unexecuted_blocks=1 00:05:03.081 00:05:03.081 ' 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:03.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.081 --rc genhtml_branch_coverage=1 00:05:03.081 --rc genhtml_function_coverage=1 00:05:03.081 --rc genhtml_legend=1 00:05:03.081 --rc geninfo_all_blocks=1 00:05:03.081 --rc geninfo_unexecuted_blocks=1 00:05:03.081 00:05:03.081 ' 00:05:03.081 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:03.081 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58141 00:05:03.081 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:03.081 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:03.081 09:27:48 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58141 00:05:03.081 09:27:48 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58141 ']' 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.081 09:27:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.081 [2024-11-05 09:27:48.951923] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:03.081 [2024-11-05 09:27:48.952182] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58141 ] 00:05:03.340 [2024-11-05 09:27:49.104310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:03.340 [2024-11-05 09:27:49.146653] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.340 [2024-11-05 09:27:49.146703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:03.340 [2024-11-05 09:27:49.146827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:03.340 [2024-11-05 09:27:49.146845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:03.340 09:27:49 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.340 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.340 POWER: Cannot set governor of lcore 0 to performance 00:05:03.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.340 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.340 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:03.340 POWER: Cannot set governor of lcore 0 to userspace 00:05:03.340 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:03.340 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:03.340 POWER: Unable to set Power Management Environment for lcore 0 00:05:03.340 [2024-11-05 09:27:49.237797] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:03.340 [2024-11-05 09:27:49.237813] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:03.340 [2024-11-05 09:27:49.238087] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:03.340 [2024-11-05 09:27:49.238205] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:03.340 [2024-11-05 09:27:49.238218] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:03.340 [2024-11-05 09:27:49.238227] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.340 09:27:49 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.340 09:27:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.340 [2024-11-05 09:27:49.276677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:03.340 [2024-11-05 09:27:49.298465] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:03.600 09:27:49 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:03.600 09:27:49 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.600 09:27:49 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 ************************************ 00:05:03.600 START TEST scheduler_create_thread 00:05:03.600 ************************************ 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 2 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 3 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 4 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 5 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 6 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 7 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 8 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 9 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 10 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:03.600 09:27:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:04.977 09:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:04.977 09:27:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:04.977 09:27:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:04.977 09:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.977 09:27:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.354 ************************************ 00:05:06.354 END TEST scheduler_create_thread 00:05:06.354 ************************************ 00:05:06.354 09:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.354 00:05:06.354 real 0m2.614s 00:05:06.354 user 0m0.018s 00:05:06.354 sys 0m0.007s 00:05:06.354 09:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.354 09:27:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:06.354 09:27:51 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:06.354 09:27:51 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58141 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58141 ']' 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58141 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58141 00:05:06.354 killing process with pid 58141 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
58141' 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58141 00:05:06.354 09:27:51 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58141 00:05:06.613 [2024-11-05 09:27:52.405670] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:06.613 00:05:06.613 real 0m3.828s 00:05:06.613 user 0m5.733s 00:05:06.613 sys 0m0.304s 00:05:06.613 ************************************ 00:05:06.613 END TEST event_scheduler 00:05:06.613 ************************************ 00:05:06.613 09:27:52 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.613 09:27:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:06.872 09:27:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:06.872 09:27:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:06.872 09:27:52 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.872 09:27:52 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.872 09:27:52 event -- common/autotest_common.sh@10 -- # set +x 00:05:06.872 ************************************ 00:05:06.872 START TEST app_repeat 00:05:06.872 ************************************ 00:05:06.872 09:27:52 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:06.872 09:27:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:06.872 Process app_repeat pid: 58222 00:05:06.872 spdk_app_start Round 0 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58222 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58222' 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:06.873 09:27:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58222 /var/tmp/spdk-nbd.sock 00:05:06.873 09:27:52 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58222 ']' 00:05:06.873 09:27:52 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:06.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:06.873 09:27:52 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:06.873 09:27:52 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
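[Editor's note: the event_scheduler test above drives everything through rpc_cmd, the RPC wrapper the autotest framework provides (it forwards to scripts/rpc.py on /var/tmp/spdk.sock). Condensed from the trace, and assuming the scheduler app has already been started with the flags shown in the log, the sequence is a sketch, not a verbatim copy of scheduler.sh:]

    # scheduler app was launched as: test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
    rpc_cmd framework_set_scheduler dynamic      # must happen while still in --wait-for-rpc state
    rpc_cmd framework_start_init                 # reactors start; the dynamic scheduler begins balancing
    # create a thread pinned to core 0 (cpumask 0x1) that reports 100% active time
    tid=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50   # change load to 50%
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$tid"          # clean teardown

[In the trace the test actually creates ten threads with different masks and active percentages (thread ids 11 and 12 are the ones mutated and deleted); the sketch collapses that onto a single thread to show the RPC shapes.]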
00:05:06.873 09:27:52 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:06.873 09:27:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:06.873 [2024-11-05 09:27:52.626436] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:06.873 [2024-11-05 09:27:52.626529] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58222 ] 00:05:06.873 [2024-11-05 09:27:52.772355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.873 [2024-11-05 09:27:52.802430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.873 [2024-11-05 09:27:52.802437] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.873 [2024-11-05 09:27:52.830762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:07.131 09:27:52 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:07.131 09:27:52 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:07.131 09:27:52 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.390 Malloc0 00:05:07.390 09:27:53 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.650 Malloc1 00:05:07.650 09:27:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.650 09:27:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:07.909 /dev/nbd0 00:05:07.909 09:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:07.909 09:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:07.909 09:27:53 event.app_repeat -- common/autotest_common.sh@870 -- # local 
nbd_name=nbd0 00:05:07.909 09:27:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:07.909 09:27:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:07.909 09:27:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:07.909 09:27:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:07.910 1+0 records in 00:05:07.910 1+0 records out 00:05:07.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000286268 s, 14.3 MB/s 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:07.910 09:27:53 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:07.910 09:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:07.910 09:27:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.910 09:27:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.169 /dev/nbd1 00:05:08.169 09:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.169 09:27:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:08.169 09:27:53 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.169 1+0 records in 00:05:08.169 1+0 records out 00:05:08.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273525 s, 15.0 MB/s 00:05:08.169 09:27:54 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.169 09:27:54 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:08.169 09:27:54 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.169 09:27:54 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:08.169 09:27:54 event.app_repeat -- 
common/autotest_common.sh@891 -- # return 0 00:05:08.169 09:27:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.169 09:27:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.169 09:27:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.169 09:27:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.169 09:27:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.429 09:27:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.429 { 00:05:08.429 "nbd_device": "/dev/nbd0", 00:05:08.429 "bdev_name": "Malloc0" 00:05:08.429 }, 00:05:08.429 { 00:05:08.429 "nbd_device": "/dev/nbd1", 00:05:08.429 "bdev_name": "Malloc1" 00:05:08.429 } 00:05:08.429 ]' 00:05:08.429 09:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.429 { 00:05:08.429 "nbd_device": "/dev/nbd0", 00:05:08.429 "bdev_name": "Malloc0" 00:05:08.429 }, 00:05:08.429 { 00:05:08.429 "nbd_device": "/dev/nbd1", 00:05:08.429 "bdev_name": "Malloc1" 00:05:08.429 } 00:05:08.429 ]' 00:05:08.429 09:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.429 09:27:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.429 /dev/nbd1' 00:05:08.429 09:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.429 /dev/nbd1' 00:05:08.429 09:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.688 256+0 records in 00:05:08.688 256+0 records out 00:05:08.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00773639 s, 136 MB/s 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.688 256+0 records in 00:05:08.688 256+0 records out 00:05:08.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247268 s, 42.4 MB/s 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.688 256+0 records in 00:05:08.688 
256+0 records out 00:05:08.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250174 s, 41.9 MB/s 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.688 09:27:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.947 09:27:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
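[Editor's note: the nbd_stop_disk teardown above relies on polling /proc/partitions until the kernel device actually disappears. A minimal sketch of that waitfornbd_exit pattern, with the 20-attempt limit taken from the trace; the sleep interval is an assumption, not visible in the log:]

    waitfornbd_exit_sketch() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # loop ends early once the name is gone from the partition table
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed interval; the trace only shows the counter and the grep
        done
        grep -q -w "$nbd_name" /proc/partitions && return 1   # still attached: fail
        return 0
    }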
00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.206 09:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.464 09:27:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.464 09:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.464 09:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.723 09:27:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.723 09:27:55 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.982 09:27:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.982 [2024-11-05 09:27:55.822783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.982 [2024-11-05 09:27:55.849537] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.982 [2024-11-05 09:27:55.849570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.982 [2024-11-05 09:27:55.876800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:09.982 [2024-11-05 09:27:55.876914] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.982 [2024-11-05 09:27:55.876928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.268 spdk_app_start Round 1 00:05:13.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:13.268 09:27:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:13.268 09:27:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:13.268 09:27:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58222 /var/tmp/spdk-nbd.sock 00:05:13.268 09:27:58 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58222 ']' 00:05:13.268 09:27:58 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.268 09:27:58 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:13.268 09:27:58 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
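[Editor's note: Round 0 of app_repeat is complete at this point and Round 1 repeats the identical flow. Stripped of the xtrace noise, one round of the data path boils down to the commands below; mktemp stands in for the test's fixed nbdrandtest path, everything else is taken from the trace:]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096                 # 64 MiB malloc bdev, 4 KiB blocks -> Malloc0
    $rpc nbd_start_disk Malloc0 /dev/nbd0           # expose the bdev as a kernel NBD device
    tmp=$(mktemp)                                   # stand-in for test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of reference data
    dd if="$tmp" of=/dev/nbd0 bs=4096 count=256 oflag=direct    # write it through /dev/nbd0
    cmp -b -n 1M "$tmp" /dev/nbd0                   # byte-for-byte verify against the device
    rm "$tmp"
    $rpc nbd_stop_disk /dev/nbd0                    # detach before the instance is killed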
00:05:13.268 09:27:58 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:13.268 09:27:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.268 09:27:59 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:13.268 09:27:59 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:13.268 09:27:59 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.527 Malloc0 00:05:13.527 09:27:59 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:13.786 Malloc1 00:05:13.786 09:27:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.786 09:27:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:14.045 /dev/nbd0 00:05:14.045 09:27:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:14.045 09:27:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.045 1+0 records in 00:05:14.045 1+0 records out 
00:05:14.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243862 s, 16.8 MB/s 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:14.045 09:27:59 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:14.045 09:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.045 09:27:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.045 09:27:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:14.303 /dev/nbd1 00:05:14.303 09:28:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:14.303 09:28:00 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:14.303 1+0 records in 00:05:14.303 1+0 records out 00:05:14.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311804 s, 13.1 MB/s 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:14.303 09:28:00 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:14.304 09:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:14.304 09:28:00 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:14.304 09:28:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:14.304 09:28:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.304 09:28:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:14.564 { 00:05:14.564 "nbd_device": "/dev/nbd0", 00:05:14.564 "bdev_name": "Malloc0" 00:05:14.564 }, 00:05:14.564 { 00:05:14.564 "nbd_device": "/dev/nbd1", 00:05:14.564 "bdev_name": "Malloc1" 00:05:14.564 } 
00:05:14.564 ]' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:14.564 { 00:05:14.564 "nbd_device": "/dev/nbd0", 00:05:14.564 "bdev_name": "Malloc0" 00:05:14.564 }, 00:05:14.564 { 00:05:14.564 "nbd_device": "/dev/nbd1", 00:05:14.564 "bdev_name": "Malloc1" 00:05:14.564 } 00:05:14.564 ]' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:14.564 /dev/nbd1' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:14.564 /dev/nbd1' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:14.564 256+0 records in 00:05:14.564 256+0 records out 00:05:14.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0084768 s, 124 MB/s 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:14.564 256+0 records in 00:05:14.564 256+0 records out 00:05:14.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0237403 s, 44.2 MB/s 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:14.564 256+0 records in 00:05:14.564 256+0 records out 00:05:14.564 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.027189 s, 38.6 MB/s 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:14.564 09:28:00 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.565 09:28:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:14.840 09:28:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:15.135 09:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:15.403 09:28:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:15.403 09:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:15.403 09:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:15.662 09:28:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:15.662 09:28:01 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:15.921 09:28:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:15.921 [2024-11-05 09:28:01.778604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.921 [2024-11-05 09:28:01.805397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.921 [2024-11-05 09:28:01.805408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.921 [2024-11-05 09:28:01.836058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:15.921 [2024-11-05 09:28:01.836170] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:15.921 [2024-11-05 09:28:01.836213] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:19.213 spdk_app_start Round 2 00:05:19.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.213 09:28:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.213 09:28:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:19.213 09:28:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58222 /var/tmp/spdk-nbd.sock 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58222 ']' 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
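[Editor's note: the count=0 check above is how nbd_common.sh confirms both devices were detached after a round. The pipeline, reconstructed from the trace; the trailing `|| true` is an assumption added here so grep's exit status 1 on zero matches does not trip `set -e`:]

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    disks_json=$($rpc nbd_get_disks)                       # "[]" once both disks are stopped
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')
    count=$(echo "$names" | grep -c /dev/nbd || true)      # 2 while attached, 0 after teardown
    [ "$count" -eq 0 ]                                     # non-zero count fails the round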
00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.213 09:28:04 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:19.213 09:28:04 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.472 Malloc0 00:05:19.472 09:28:05 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.731 Malloc1 00:05:19.731 09:28:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.731 09:28:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.991 /dev/nbd0 00:05:19.991 09:28:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.991 09:28:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.991 1+0 records in 00:05:19.991 1+0 records out 
00:05:19.991 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000269615 s, 15.2 MB/s 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:19.991 09:28:05 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:19.991 09:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.991 09:28:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.991 09:28:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.250 /dev/nbd1 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.250 1+0 records in 00:05:20.250 1+0 records out 00:05:20.250 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255598 s, 16.0 MB/s 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:20.250 09:28:06 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.250 09:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.510 { 00:05:20.510 "nbd_device": "/dev/nbd0", 00:05:20.510 "bdev_name": "Malloc0" 00:05:20.510 }, 00:05:20.510 { 00:05:20.510 "nbd_device": "/dev/nbd1", 00:05:20.510 "bdev_name": "Malloc1" 00:05:20.510 } 
00:05:20.510 ]' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.510 { 00:05:20.510 "nbd_device": "/dev/nbd0", 00:05:20.510 "bdev_name": "Malloc0" 00:05:20.510 }, 00:05:20.510 { 00:05:20.510 "nbd_device": "/dev/nbd1", 00:05:20.510 "bdev_name": "Malloc1" 00:05:20.510 } 00:05:20.510 ]' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.510 /dev/nbd1' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.510 /dev/nbd1' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.510 256+0 records in 00:05:20.510 256+0 records out 00:05:20.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00974825 s, 108 MB/s 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.510 09:28:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.770 256+0 records in 00:05:20.770 256+0 records out 00:05:20.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0240754 s, 43.6 MB/s 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.770 256+0 records in 00:05:20.770 256+0 records out 00:05:20.770 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.025216 s, 41.6 MB/s 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.770 09:28:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:21.030 09:28:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:21.289 09:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.548 09:28:07 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.548 09:28:07 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.807 09:28:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.807 [2024-11-05 09:28:07.766890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:22.066 [2024-11-05 09:28:07.796861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.066 [2024-11-05 09:28:07.796861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.066 [2024-11-05 09:28:07.825932] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.067 [2024-11-05 09:28:07.826051] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:22.067 [2024-11-05 09:28:07.826064] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:25.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.355 09:28:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58222 /var/tmp/spdk-nbd.sock 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58222 ']' 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
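The app_repeat data pass above is a self-contained integrity check: fill a scratch file with 1 MiB of random data, copy it onto every exported NBD device with O_DIRECT, then compare each device byte-for-byte against the file before unwinding the devices. A minimal standalone sketch of that cycle, using the same dd/cmp invocations as the trace (device list and scratch path are illustrative):

    # Write phase: 256 x 4 KiB = 1 MiB of random data per device.
    nbd_list=(/dev/nbd0 /dev/nbd1)     # assumed already connected
    tmp_file=$(mktemp)
    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
    for dev in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
    done
    # Verify phase: byte-wise compare of the first 1 MiB of each device.
    for dev in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$dev"    # non-zero exit on any mismatch
    done
    rm "$tmp_file"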
00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:25.355 09:28:10 event.app_repeat -- event/event.sh@39 -- # killprocess 58222 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58222 ']' 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58222 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58222 00:05:25.355 killing process with pid 58222 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58222' 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58222 00:05:25.355 09:28:10 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58222 00:05:25.355 spdk_app_start is called in Round 0. 00:05:25.355 Shutdown signal received, stop current app iteration 00:05:25.355 Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 reinitialization... 00:05:25.355 spdk_app_start is called in Round 1. 00:05:25.355 Shutdown signal received, stop current app iteration 00:05:25.355 Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 reinitialization... 00:05:25.355 spdk_app_start is called in Round 2. 00:05:25.355 Shutdown signal received, stop current app iteration 00:05:25.355 Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 reinitialization... 00:05:25.355 spdk_app_start is called in Round 3. 00:05:25.355 Shutdown signal received, stop current app iteration 00:05:25.355 ************************************ 00:05:25.355 END TEST app_repeat 00:05:25.355 ************************************ 00:05:25.355 09:28:11 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:25.355 09:28:11 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:25.355 00:05:25.355 real 0m18.514s 00:05:25.355 user 0m42.603s 00:05:25.355 sys 0m2.547s 00:05:25.355 09:28:11 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.355 09:28:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:25.355 09:28:11 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:25.355 09:28:11 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.355 09:28:11 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.355 09:28:11 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.355 09:28:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:25.355 ************************************ 00:05:25.355 START TEST cpu_locks 00:05:25.355 ************************************ 00:05:25.355 09:28:11 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:25.355 * Looking for test storage... 
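killprocess, traced at the end of each round above, guards the kill behind three checks: the pid must still exist (kill -0), the host must be Linux so the comm lookup works, and the process name read via ps must be an SPDK reactor rather than a sudo wrapper. Only then does it signal and reap the pid. A condensed sketch of that guard (the sudo branch of the real helper is omitted):

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return 1                  # pid must still be alive
        [[ $(uname) == Linux ]] || return 1         # ps comm= lookup is Linux-specific
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [[ $process_name == sudo ]] && return 1     # real helper handles sudo children instead
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap; SIGTERM triggers a clean shutdown, so this returns 0
    }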
00:05:25.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:25.355 09:28:11 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.355 09:28:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.355 09:28:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.614 09:28:11 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.614 09:28:11 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:25.614 09:28:11 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.614 09:28:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.614 --rc genhtml_branch_coverage=1 00:05:25.614 --rc genhtml_function_coverage=1 00:05:25.614 --rc genhtml_legend=1 00:05:25.614 --rc geninfo_all_blocks=1 00:05:25.614 --rc geninfo_unexecuted_blocks=1 00:05:25.614 00:05:25.614 ' 00:05:25.614 09:28:11 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.614 --rc genhtml_branch_coverage=1 00:05:25.614 --rc genhtml_function_coverage=1 
00:05:25.614 --rc genhtml_legend=1 00:05:25.615 --rc geninfo_all_blocks=1 00:05:25.615 --rc geninfo_unexecuted_blocks=1 00:05:25.615 00:05:25.615 ' 00:05:25.615 09:28:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.615 --rc genhtml_branch_coverage=1 00:05:25.615 --rc genhtml_function_coverage=1 00:05:25.615 --rc genhtml_legend=1 00:05:25.615 --rc geninfo_all_blocks=1 00:05:25.615 --rc geninfo_unexecuted_blocks=1 00:05:25.615 00:05:25.615 ' 00:05:25.615 09:28:11 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.615 --rc genhtml_branch_coverage=1 00:05:25.615 --rc genhtml_function_coverage=1 00:05:25.615 --rc genhtml_legend=1 00:05:25.615 --rc geninfo_all_blocks=1 00:05:25.615 --rc geninfo_unexecuted_blocks=1 00:05:25.615 00:05:25.615 ' 00:05:25.615 09:28:11 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:25.615 09:28:11 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:25.615 09:28:11 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:25.615 09:28:11 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:25.615 09:28:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.615 09:28:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.615 09:28:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.615 ************************************ 00:05:25.615 START TEST default_locks 00:05:25.615 ************************************ 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58655 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58655 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58655 ']' 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.615 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.615 [2024-11-05 09:28:11.414688] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
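default_locks, starting here, exercises the core-lock mechanism itself: a target launched with -m 0x1 claims core 0 by taking a POSIX lock on a file under /var/tmp (spdk_cpu_lock_000 for core 0), and the locks_exist helper asserts that the lock is visible for the target's pid. The check is exactly the lslocks pipeline traced below:

    # Assert that $pid (a running spdk_tgt) holds at least one core lock.
    locks_exist() {
        local pid=$1
        # lslocks lists the locks held per pid; core-lock paths contain
        # "spdk_cpu_lock" (e.g. /var/tmp/spdk_cpu_lock_000 for core 0).
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$pid" && echo "core lock held by pid $pid"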
00:05:25.615 [2024-11-05 09:28:11.415485] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58655 ] 00:05:25.615 [2024-11-05 09:28:11.565777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.874 [2024-11-05 09:28:11.599185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.874 [2024-11-05 09:28:11.637699] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.874 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:25.874 09:28:11 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:25.874 09:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58655 00:05:25.874 09:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58655 00:05:25.874 09:28:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:26.132 09:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58655 00:05:26.132 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58655 ']' 00:05:26.132 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58655 00:05:26.132 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.392 killing process with pid 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58655' 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58655 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58655 ']' 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.392 
09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.392 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58655) - No such process 00:05:26.392 ERROR: process (pid: 58655) is no longer running 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.392 00:05:26.392 real 0m1.004s 00:05:26.392 user 0m1.066s 00:05:26.392 sys 0m0.374s 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.392 ************************************ 00:05:26.392 END TEST default_locks 00:05:26.392 ************************************ 00:05:26.392 09:28:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.652 09:28:12 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:26.652 09:28:12 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.652 09:28:12 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.652 09:28:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.652 ************************************ 00:05:26.652 START TEST default_locks_via_rpc 00:05:26.652 ************************************ 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58694 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58694 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58694 ']' 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local 
max_retries=100 00:05:26.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.652 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.652 [2024-11-05 09:28:12.452868] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:26.652 [2024-11-05 09:28:12.452970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58694 ] 00:05:26.652 [2024-11-05 09:28:12.593849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.911 [2024-11-05 09:28:12.628093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.911 [2024-11-05 09:28:12.668689] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58694 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58694 00:05:26.911 09:28:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58694 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58694 ']' 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58694 00:05:27.480 09:28:13 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58694 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:27.480 killing process with pid 58694 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58694' 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58694 00:05:27.480 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58694 00:05:27.739 00:05:27.739 real 0m1.072s 00:05:27.739 user 0m1.175s 00:05:27.739 sys 0m0.417s 00:05:27.739 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:27.739 09:28:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 ************************************ 00:05:27.739 END TEST default_locks_via_rpc 00:05:27.739 ************************************ 00:05:27.739 09:28:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:27.739 09:28:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:27.739 09:28:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:27.739 09:28:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 ************************************ 00:05:27.739 START TEST non_locking_app_on_locked_coremask 00:05:27.739 ************************************ 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58738 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58738 /var/tmp/spdk.sock 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58738 ']' 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
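The no_locks helper, traced after each kill above, closes the loop on cleanup: once the target is gone, no spdk_cpu_lock file may remain on disk. The trace only shows the resulting empty array, so the glob below is a reconstruction:

    no_locks() {
        shopt -s nullglob
        local lock_files=(/var/tmp/spdk_cpu_lock*)
        shopt -u nullglob
        (( ${#lock_files[@]} == 0 ))    # fail if any lock file survived
    }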
00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.739 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.739 [2024-11-05 09:28:13.590750] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:27.739 [2024-11-05 09:28:13.590895] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58738 ] 00:05:27.998 [2024-11-05 09:28:13.740311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.998 [2024-11-05 09:28:13.771648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.998 [2024-11-05 09:28:13.811488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58746 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58746 /var/tmp/spdk2.sock 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58746 ']' 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:27.998 09:28:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.257 [2024-11-05 09:28:14.006702] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:28.257 [2024-11-05 09:28:14.006820] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58746 ] 00:05:28.257 [2024-11-05 09:28:14.168807] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
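The "CPU core locks deactivated" notice just above is the crux of non_locking_app_on_locked_coremask: the second target runs on the same -m 0x1 mask but with --disable-cpumask-locks, so it skips lock acquisition and coexists with the lock holder on core 0, while a separate -r socket keeps the two instances independently addressable. Condensed launch sequence, using the flags exactly as traced:

    SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    "$SPDK_BIN" -m 0x1 &                                          # takes the core 0 lock
    pid1=$!
    "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!                                                       # no lock, own RPC socket
    # waitforlisten on each pid/socket before using them (as in the trace)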
00:05:28.257 [2024-11-05 09:28:14.168882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.516 [2024-11-05 09:28:14.232707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.516 [2024-11-05 09:28:14.312321] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:29.084 09:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.084 09:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:29.084 09:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58738 00:05:29.084 09:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58738 00:05:29.085 09:28:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58738 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58738 ']' 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58738 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58738 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:30.022 killing process with pid 58738 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58738' 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58738 00:05:30.022 09:28:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58738 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58746 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58746 ']' 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58746 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58746 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 58746' 00:05:30.591 killing process with pid 58746 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58746 00:05:30.591 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58746 00:05:30.851 00:05:30.851 real 0m3.067s 00:05:30.851 user 0m3.593s 00:05:30.851 sys 0m0.899s 00:05:30.851 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.851 09:28:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.851 ************************************ 00:05:30.851 END TEST non_locking_app_on_locked_coremask 00:05:30.851 ************************************ 00:05:30.851 09:28:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:30.851 09:28:16 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.851 09:28:16 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.851 09:28:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.852 ************************************ 00:05:30.852 START TEST locking_app_on_unlocked_coremask 00:05:30.852 ************************************ 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58808 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58808 /var/tmp/spdk.sock 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58808 ']' 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.852 09:28:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.852 [2024-11-05 09:28:16.696505] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:30.852 [2024-11-05 09:28:16.696590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58808 ] 00:05:31.111 [2024-11-05 09:28:16.837680] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
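Every case in this stream is framed by run_test, which prints the starred START/END banners, runs the named function under the shell's timer, and emits the real/user/sys summary seen above on success. A compact wrapper that reproduces the framing (the real helper also toggles xtrace and validates its arguments, omitted here):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                    # produces the real/user/sys lines
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }

    run_test default_locks default_locks    # invocation shape as traced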
00:05:31.111 [2024-11-05 09:28:16.837733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.111 [2024-11-05 09:28:16.866105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.111 [2024-11-05 09:28:16.901748] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58816 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58816 /var/tmp/spdk2.sock 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58816 ']' 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:31.111 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:31.112 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.112 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:31.112 09:28:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.371 [2024-11-05 09:28:17.080384] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
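Each "Waiting for process to start up and listen on UNIX domain socket ..." line comes from waitforlisten, which polls until the new target's RPC socket answers or the retries run out. The trace shows the message, the max_retries=100 default, and the final return 0; the probe itself is not traced, so the rpc.py call below is an assumption about the mechanism:

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 1; i <= max_retries; i++)); do
            kill -0 "$pid" || return 1    # target died during startup
            # assumed probe: any cheap RPC succeeds once the socket is live
            if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.5
        done
        return 1
    }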
00:05:31.371 [2024-11-05 09:28:17.080500] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58816 ] 00:05:31.371 [2024-11-05 09:28:17.236482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.371 [2024-11-05 09:28:17.292201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.631 [2024-11-05 09:28:17.363940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:32.200 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:32.200 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:32.200 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58816 00:05:32.200 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58816 00:05:32.200 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.138 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58808 00:05:33.138 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58808 ']' 00:05:33.138 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58808 00:05:33.138 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:33.138 09:28:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:33.138 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58808 00:05:33.138 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:33.138 killing process with pid 58808 00:05:33.138 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:33.138 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58808' 00:05:33.138 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58808 00:05:33.138 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58808 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58816 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58816 ']' 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 58816 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58816 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:33.707 killing process with pid 58816 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58816' 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 58816 00:05:33.707 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 58816 00:05:33.966 00:05:33.966 real 0m3.125s 00:05:33.966 user 0m3.704s 00:05:33.966 sys 0m0.913s 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.966 ************************************ 00:05:33.966 END TEST locking_app_on_unlocked_coremask 00:05:33.966 ************************************ 00:05:33.966 09:28:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:33.966 09:28:19 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:33.966 09:28:19 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.966 09:28:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:33.966 ************************************ 00:05:33.966 START TEST locking_app_on_locked_coremask 00:05:33.966 ************************************ 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=58878 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 58878 /var/tmp/spdk.sock 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58878 ']' 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.966 09:28:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:33.966 [2024-11-05 09:28:19.891766] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
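locking_app_on_locked_coremask, starting here, is the negative case: with pid 58878 holding core 0, a second target on the same mask must exit instead of listening, so the harness wraps waitforlisten in NOT, a helper that inverts exit status. Its final test is the literal (( !es == 0 )) arithmetic traced below; the signal special-case is condensed:

    NOT() {
        local es=0
        "$@" || es=$?
        # the real helper special-cases es > 128 (death by signal) first
        (( !es == 0 ))    # success if and only if the wrapped command failed
    }

    NOT waitforlisten "$pid2" /var/tmp/spdk2.sock    # expected to 'pass' here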
00:05:33.966 [2024-11-05 09:28:19.891919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:05:34.225 [2024-11-05 09:28:20.037654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.225 [2024-11-05 09:28:20.066614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.225 [2024-11-05 09:28:20.105133] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.160 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=58894 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 58894 /var/tmp/spdk2.sock 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58894 /var/tmp/spdk2.sock 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58894 /var/tmp/spdk2.sock 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58894 ']' 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.161 09:28:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.161 [2024-11-05 09:28:20.919149] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
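Note the rpc_addr just traced: the doomed second instance is pointed at /var/tmp/spdk2.sock, mirroring the -r flag it was launched with. The pairing is always -r on the target and -s on the client, which is what lets one host drive several targets; framework_get_reactors below is just an example method, not one used in this run:

    # Target side: bind the RPC listener to a non-default socket.
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &

    # Client side: route the call to that specific instance.
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors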
00:05:35.161 [2024-11-05 09:28:20.919243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58894 ] 00:05:35.161 [2024-11-05 09:28:21.077229] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 58878 has claimed it. 00:05:35.161 [2024-11-05 09:28:21.077297] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:35.728 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58894) - No such process 00:05:35.728 ERROR: process (pid: 58894) is no longer running 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 58878 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58878 00:05:35.728 09:28:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 58878 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58878 ']' 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58878 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58878 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:36.297 killing process with pid 58878 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58878' 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58878 00:05:36.297 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58878 00:05:36.557 00:05:36.557 real 0m2.474s 00:05:36.557 user 0m3.039s 00:05:36.557 sys 0m0.541s 00:05:36.557 09:28:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.557 09:28:22 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:05:36.557 ************************************ 00:05:36.557 END TEST locking_app_on_locked_coremask 00:05:36.557 ************************************ 00:05:36.557 09:28:22 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:36.557 09:28:22 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.557 09:28:22 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.557 09:28:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.557 ************************************ 00:05:36.557 START TEST locking_overlapped_coremask 00:05:36.557 ************************************ 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=58939 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 58939 /var/tmp/spdk.sock 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58939 ']' 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.557 09:28:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:36.557 [2024-11-05 09:28:22.421948] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
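locking_overlapped_coremask widens the scenario from one core to overlapping sets: the first target's -m 0x7 is binary 111 (cores 0, 1, 2) and the second's -m 0x1c is binary 11100 (cores 2, 3, 4), so the two masks collide exactly on core 2, the core named in the claim error further down. A short decoder for any mask (illustrative):

    mask=0x1c
    for ((i = 0; i < 64; i++)); do
        (( (mask >> i) & 1 )) && echo "core $i"
    done
    # 0x07 -> cores 0 1 2; 0x1c -> cores 2 3 4; overlap: core 2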
00:05:36.557 [2024-11-05 09:28:22.422613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58939 ] 00:05:36.817 [2024-11-05 09:28:22.578977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:36.817 [2024-11-05 09:28:22.619706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.817 [2024-11-05 09:28:22.619901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.817 [2024-11-05 09:28:22.620127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.817 [2024-11-05 09:28:22.665677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=58957 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 58957 /var/tmp/spdk2.sock 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58957 /var/tmp/spdk2.sock 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 58957 /var/tmp/spdk2.sock 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 58957 ']' 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:37.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:37.755 09:28:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.755 [2024-11-05 09:28:23.440521] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:37.755 [2024-11-05 09:28:23.440631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:05:37.755 [2024-11-05 09:28:23.598568] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58939 has claimed it. 00:05:37.755 [2024-11-05 09:28:23.598655] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:38.413 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58957) - No such process 00:05:38.413 ERROR: process (pid: 58957) is no longer running 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 58939 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 58939 ']' 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 58939 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58939 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:38.413 killing process with pid 58939 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58939' 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 58939 00:05:38.413 09:28:24 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 58939 00:05:38.413 00:05:38.413 real 0m2.014s 00:05:38.413 user 0m5.781s 00:05:38.413 sys 0m0.315s 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.413 09:28:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:38.413 ************************************ 00:05:38.413 END TEST locking_overlapped_coremask 00:05:38.413 ************************************ 00:05:38.673 09:28:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:38.673 09:28:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.673 09:28:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.673 09:28:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:38.673 ************************************ 00:05:38.673 START TEST locking_overlapped_coremask_via_rpc 00:05:38.673 ************************************ 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=58997 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 58997 /var/tmp/spdk.sock 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58997 ']' 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.673 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.673 [2024-11-05 09:28:24.472255] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:38.673 [2024-11-05 09:28:24.472379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58997 ] 00:05:38.673 [2024-11-05 09:28:24.609276] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:38.673 [2024-11-05 09:28:24.609325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.932 [2024-11-05 09:28:24.641959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.932 [2024-11-05 09:28:24.642091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.932 [2024-11-05 09:28:24.642094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.932 [2024-11-05 09:28:24.680024] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59008 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59008 /var/tmp/spdk2.sock 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59008 ']' 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.932 09:28:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.932 [2024-11-05 09:28:24.852411] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:38.932 [2024-11-05 09:28:24.852487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59008 ] 00:05:39.192 [2024-11-05 09:28:25.009247] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:39.192 [2024-11-05 09:28:25.009302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:39.192 [2024-11-05 09:28:25.074235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:39.192 [2024-11-05 09:28:25.078002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.192 [2024-11-05 09:28:25.078017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:39.451 [2024-11-05 09:28:25.161342] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.021 [2024-11-05 09:28:25.879012] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 58997 has claimed it. 00:05:40.021 request: 00:05:40.021 { 00:05:40.021 "method": "framework_enable_cpumask_locks", 00:05:40.021 "req_id": 1 00:05:40.021 } 00:05:40.021 Got JSON-RPC error response 00:05:40.021 response: 00:05:40.021 { 00:05:40.021 "code": -32603, 00:05:40.021 "message": "Failed to claim CPU core: 2" 00:05:40.021 } 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
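The JSON-RPC exchange above is the heart of the via_rpc variant: both targets start with --disable-cpumask-locks, the primary then claims its core locks on demand, and the overlapping target's claim on core 2 is refused with -32603. A minimal sketch of the same two calls, assuming the two targets from this run are still listening on their sockets:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # Primary target (mask 0x7): takes /var/tmp/spdk_cpu_lock_000..002
    "$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks

    # Overlapping target (mask 0x1c) shares core 2, so this call is
    # expected to fail with "Failed to claim CPU core: 2" (code -32603)
    "$rpc" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo 'second claim refused, as the test expects'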
00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 58997 /var/tmp/spdk.sock 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58997 ']' 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.021 09:28:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59008 /var/tmp/spdk2.sock 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59008 ']' 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:40.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
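The es bookkeeping traced after each expected failure in this suite, the (( es > 128 )) and (( !es == 0 )) checks, comes from autotest_common.sh's NOT wrapper: run a command that must fail and invert its status. A condensed sketch of the pattern (simplified; the real helper also validates the argument via valid_exec_arg and special-cases exit statuses above 128, which indicate a signal):

    NOT() {
        local es=0
        "$@" || es=$?
        # invert: NOT succeeds only when the wrapped command failed
        (( es != 0 ))
    }

    # mirrors the earlier locking_overlapped_coremask step in this log:
    NOT waitforlisten 58957 /var/tmp/spdk2.sock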
00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:40.281 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.540 00:05:40.540 real 0m2.021s 00:05:40.540 user 0m1.217s 00:05:40.540 sys 0m0.153s 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:40.540 09:28:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.540 ************************************ 00:05:40.540 END TEST locking_overlapped_coremask_via_rpc 00:05:40.540 ************************************ 00:05:40.540 09:28:26 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:40.540 09:28:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58997 ]] 00:05:40.540 09:28:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58997 00:05:40.540 09:28:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58997 ']' 00:05:40.540 09:28:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58997 00:05:40.540 09:28:26 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:40.540 09:28:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.540 09:28:26 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58997 00:05:40.800 killing process with pid 58997 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58997' 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 58997 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 58997 00:05:40.800 09:28:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59008 ]] 00:05:40.800 09:28:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59008 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59008 ']' 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59008 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:05:40.800 09:28:26 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.800 
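check_remaining_locks, expanded in the trace above, is just a glob-versus-brace-expansion comparison: the lock files actually present under /var/tmp must exactly match the set implied by the core mask (000..002 for mask 0x7). The same check as a standalone snippet, lifted from the pattern in the trace:

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
        # both expansions come out in sorted order, so a string compare suffices
        [[ "${locks[*]}" == "${locks_expected[*]}" ]]
    }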
09:28:26 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59008 00:05:41.060 killing process with pid 59008 00:05:41.060 09:28:26 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:41.060 09:28:26 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:41.060 09:28:26 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59008' 00:05:41.060 09:28:26 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59008 00:05:41.060 09:28:26 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59008 00:05:41.060 09:28:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.319 09:28:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:41.319 09:28:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 58997 ]] 00:05:41.319 09:28:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 58997 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 58997 ']' 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 58997 00:05:41.319 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (58997) - No such process 00:05:41.319 Process with pid 58997 is not found 00:05:41.319 Process with pid 59008 is not found 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 58997 is not found' 00:05:41.319 09:28:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59008 ]] 00:05:41.319 09:28:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59008 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59008 ']' 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59008 00:05:41.319 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59008) - No such process 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59008 is not found' 00:05:41.319 09:28:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:41.319 00:05:41.319 real 0m15.872s 00:05:41.319 user 0m29.875s 00:05:41.319 sys 0m4.259s 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.319 09:28:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:41.319 ************************************ 00:05:41.319 END TEST cpu_locks 00:05:41.319 ************************************ 00:05:41.319 ************************************ 00:05:41.319 END TEST event 00:05:41.319 ************************************ 00:05:41.319 00:05:41.319 real 0m42.392s 00:05:41.319 user 1m24.666s 00:05:41.319 sys 0m7.476s 00:05:41.319 09:28:27 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:41.319 09:28:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:41.319 09:28:27 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.319 09:28:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:41.319 09:28:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.319 09:28:27 -- common/autotest_common.sh@10 -- # set +x 00:05:41.320 ************************************ 00:05:41.320 START TEST thread 00:05:41.320 ************************************ 00:05:41.320 09:28:27 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:41.320 * Looking for test storage... 
00:05:41.320 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:41.320 09:28:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:41.320 09:28:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:05:41.320 09:28:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:41.579 09:28:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.579 09:28:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.579 09:28:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.579 09:28:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.579 09:28:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.579 09:28:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.579 09:28:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.579 09:28:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.579 09:28:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.579 09:28:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.579 09:28:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.579 09:28:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:41.579 09:28:27 thread -- scripts/common.sh@345 -- # : 1 00:05:41.579 09:28:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.579 09:28:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.579 09:28:27 thread -- scripts/common.sh@365 -- # decimal 1 00:05:41.579 09:28:27 thread -- scripts/common.sh@353 -- # local d=1 00:05:41.579 09:28:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.579 09:28:27 thread -- scripts/common.sh@355 -- # echo 1 00:05:41.579 09:28:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.579 09:28:27 thread -- scripts/common.sh@366 -- # decimal 2 00:05:41.579 09:28:27 thread -- scripts/common.sh@353 -- # local d=2 00:05:41.579 09:28:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.579 09:28:27 thread -- scripts/common.sh@355 -- # echo 2 00:05:41.579 09:28:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.579 09:28:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.579 09:28:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.579 09:28:27 thread -- scripts/common.sh@368 -- # return 0 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.579 --rc genhtml_branch_coverage=1 00:05:41.579 --rc genhtml_function_coverage=1 00:05:41.579 --rc genhtml_legend=1 00:05:41.579 --rc geninfo_all_blocks=1 00:05:41.579 --rc geninfo_unexecuted_blocks=1 00:05:41.579 00:05:41.579 ' 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.579 --rc genhtml_branch_coverage=1 00:05:41.579 --rc genhtml_function_coverage=1 00:05:41.579 --rc genhtml_legend=1 00:05:41.579 --rc geninfo_all_blocks=1 00:05:41.579 --rc geninfo_unexecuted_blocks=1 00:05:41.579 00:05:41.579 ' 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:41.579 --rc genhtml_branch_coverage=1 00:05:41.579 --rc genhtml_function_coverage=1 00:05:41.579 --rc genhtml_legend=1 00:05:41.579 --rc geninfo_all_blocks=1 00:05:41.579 --rc geninfo_unexecuted_blocks=1 00:05:41.579 00:05:41.579 ' 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:41.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.579 --rc genhtml_branch_coverage=1 00:05:41.579 --rc genhtml_function_coverage=1 00:05:41.579 --rc genhtml_legend=1 00:05:41.579 --rc geninfo_all_blocks=1 00:05:41.579 --rc geninfo_unexecuted_blocks=1 00:05:41.579 00:05:41.579 ' 00:05:41.579 09:28:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:41.579 09:28:27 thread -- common/autotest_common.sh@10 -- # set +x 00:05:41.579 ************************************ 00:05:41.579 START TEST thread_poller_perf 00:05:41.579 ************************************ 00:05:41.579 09:28:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:41.579 [2024-11-05 09:28:27.338689] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:41.579 [2024-11-05 09:28:27.338998] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59138 ] 00:05:41.579 [2024-11-05 09:28:27.485048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.579 [2024-11-05 09:28:27.511772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.579 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:42.959 [2024-11-05T09:28:28.917Z] ====================================== 00:05:42.959 [2024-11-05T09:28:28.917Z] busy:2206824338 (cyc) 00:05:42.959 [2024-11-05T09:28:28.917Z] total_run_count: 366000 00:05:42.959 [2024-11-05T09:28:28.917Z] tsc_hz: 2200000000 (cyc) 00:05:42.959 [2024-11-05T09:28:28.917Z] ====================================== 00:05:42.959 [2024-11-05T09:28:28.917Z] poller_cost: 6029 (cyc), 2740 (nsec) 00:05:42.959 00:05:42.959 ************************************ 00:05:42.959 END TEST thread_poller_perf 00:05:42.959 ************************************ 00:05:42.959 real 0m1.234s 00:05:42.959 user 0m1.092s 00:05:42.959 sys 0m0.035s 00:05:42.959 09:28:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:42.959 09:28:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:42.959 09:28:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.959 09:28:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:05:42.959 09:28:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:42.959 09:28:28 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.959 ************************************ 00:05:42.959 START TEST thread_poller_perf 00:05:42.959 ************************************ 00:05:42.959 09:28:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:42.959 [2024-11-05 09:28:28.626426] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:42.959 [2024-11-05 09:28:28.626686] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59168 ] 00:05:42.959 [2024-11-05 09:28:28.776460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.959 Running 1000 pollers for 1 seconds with 0 microseconds period. 
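The first run's summary reduces to one division: poller_cost is busy cycles over total_run_count, converted to nanoseconds at the reported tsc_hz. Reproducing those numbers in shell arithmetic (a sketch of the calculation, not the tool's exact code):

    busy=2206824338 runs=366000 tsc_hz=2200000000
    cyc=$(( busy / runs ))                  # 6029 cycles per poller call
    nsec=$(( cyc * 1000000000 / tsc_hz ))   # 2740 ns at 2.2 GHz
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The second run below, with a 0-microsecond period, packs roughly 13x more calls into the same second, so the per-call cost drops to 459 cycles.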
00:05:42.959 [2024-11-05 09:28:28.805251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.912 [2024-11-05T09:28:29.870Z] ====================================== 00:05:43.912 [2024-11-05T09:28:29.870Z] busy:2202223516 (cyc) 00:05:43.912 [2024-11-05T09:28:29.870Z] total_run_count: 4790000 00:05:43.912 [2024-11-05T09:28:29.870Z] tsc_hz: 2200000000 (cyc) 00:05:43.912 [2024-11-05T09:28:29.870Z] ====================================== 00:05:43.912 [2024-11-05T09:28:29.870Z] poller_cost: 459 (cyc), 208 (nsec) 00:05:43.912 00:05:43.912 real 0m1.234s 00:05:43.912 user 0m1.091s 00:05:43.912 sys 0m0.036s 00:05:43.912 ************************************ 00:05:43.912 END TEST thread_poller_perf 00:05:43.912 ************************************ 00:05:43.913 09:28:29 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.913 09:28:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 09:28:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:44.177 ************************************ 00:05:44.177 END TEST thread 00:05:44.177 ************************************ 00:05:44.177 00:05:44.177 real 0m2.760s 00:05:44.177 user 0m2.351s 00:05:44.177 sys 0m0.194s 00:05:44.177 09:28:29 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:44.177 09:28:29 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 09:28:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:44.177 09:28:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:44.177 09:28:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:44.177 09:28:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:44.177 09:28:29 -- common/autotest_common.sh@10 -- # set +x 00:05:44.177 ************************************ 00:05:44.177 START TEST app_cmdline 00:05:44.177 ************************************ 00:05:44.177 09:28:29 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:44.177 * Looking for test storage... 
00:05:44.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:44.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
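The scripts/common.sh trace above is the coverage gate comparing versions field by field: lt 1.15 2 splits both strings on '.', '-' and ':' and compares the numeric fields left to right. A condensed sketch of that comparison (simplified; the real cmp_versions also supports '>' and '=' and routes each field through its decimal helper):

    lt() {
        local IFS='.-:'
        local -a v1=($1) v2=($2)
        local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1   # strictly newer
        done
        return 1   # equal versions are not less-than
    }

    lt 1.15 2 && echo "1.15 < 2"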
00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.177 09:28:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:44.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.177 --rc genhtml_branch_coverage=1 00:05:44.177 --rc genhtml_function_coverage=1 00:05:44.177 --rc genhtml_legend=1 00:05:44.177 --rc geninfo_all_blocks=1 00:05:44.177 --rc geninfo_unexecuted_blocks=1 00:05:44.177 00:05:44.177 ' 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:44.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.177 --rc genhtml_branch_coverage=1 00:05:44.177 --rc genhtml_function_coverage=1 00:05:44.177 --rc genhtml_legend=1 00:05:44.177 --rc geninfo_all_blocks=1 00:05:44.177 --rc geninfo_unexecuted_blocks=1 00:05:44.177 00:05:44.177 ' 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:44.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.177 --rc genhtml_branch_coverage=1 00:05:44.177 --rc genhtml_function_coverage=1 00:05:44.177 --rc genhtml_legend=1 00:05:44.177 --rc geninfo_all_blocks=1 00:05:44.177 --rc geninfo_unexecuted_blocks=1 00:05:44.177 00:05:44.177 ' 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:44.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.177 --rc genhtml_branch_coverage=1 00:05:44.177 --rc genhtml_function_coverage=1 00:05:44.177 --rc genhtml_legend=1 00:05:44.177 --rc geninfo_all_blocks=1 00:05:44.177 --rc geninfo_unexecuted_blocks=1 00:05:44.177 00:05:44.177 ' 00:05:44.177 09:28:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:44.177 09:28:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59251 00:05:44.177 09:28:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59251 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59251 ']' 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.177 09:28:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.177 09:28:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.436 [2024-11-05 09:28:30.152799] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:44.436 [2024-11-05 09:28:30.153162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59251 ] 00:05:44.436 [2024-11-05 09:28:30.299535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.436 [2024-11-05 09:28:30.333114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.436 [2024-11-05 09:28:30.373367] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:44.695 09:28:30 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.695 09:28:30 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:05:44.695 09:28:30 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:44.955 { 00:05:44.955 "version": "SPDK v25.01-pre git sha1 6b98809f9", 00:05:44.955 "fields": { 00:05:44.955 "major": 25, 00:05:44.955 "minor": 1, 00:05:44.955 "patch": 0, 00:05:44.955 "suffix": "-pre", 00:05:44.955 "commit": "6b98809f9" 00:05:44.955 } 00:05:44.955 } 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:44.955 09:28:30 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@644 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:44.955 09:28:30 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:45.215 request: 00:05:45.215 { 00:05:45.215 "method": "env_dpdk_get_mem_stats", 00:05:45.215 "req_id": 1 00:05:45.215 } 00:05:45.215 Got JSON-RPC error response 00:05:45.215 response: 00:05:45.215 { 00:05:45.215 "code": -32601, 00:05:45.215 "message": "Method not found" 00:05:45.215 } 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:45.215 09:28:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59251 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59251 ']' 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59251 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:45.215 09:28:31 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59251 00:05:45.474 killing process with pid 59251 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59251' 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@971 -- # kill 59251 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@976 -- # wait 59251 00:05:45.474 00:05:45.474 real 0m1.474s 00:05:45.474 user 0m2.013s 00:05:45.474 sys 0m0.347s 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.474 09:28:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.474 ************************************ 00:05:45.474 END TEST app_cmdline 00:05:45.474 ************************************ 00:05:45.734 09:28:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:45.734 09:28:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.734 09:28:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.734 09:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.734 ************************************ 00:05:45.734 START TEST version 00:05:45.734 ************************************ 00:05:45.734 09:28:31 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:45.734 * Looking for test storage... 
00:05:45.734 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:45.734 09:28:31 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.734 09:28:31 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.734 09:28:31 version -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.734 09:28:31 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.734 09:28:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.734 09:28:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.734 09:28:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.734 09:28:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.734 09:28:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.734 09:28:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.734 09:28:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.735 09:28:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.735 09:28:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.735 09:28:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.735 09:28:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.735 09:28:31 version -- scripts/common.sh@344 -- # case "$op" in 00:05:45.735 09:28:31 version -- scripts/common.sh@345 -- # : 1 00:05:45.735 09:28:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.735 09:28:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.735 09:28:31 version -- scripts/common.sh@365 -- # decimal 1 00:05:45.735 09:28:31 version -- scripts/common.sh@353 -- # local d=1 00:05:45.735 09:28:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.735 09:28:31 version -- scripts/common.sh@355 -- # echo 1 00:05:45.735 09:28:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.735 09:28:31 version -- scripts/common.sh@366 -- # decimal 2 00:05:45.735 09:28:31 version -- scripts/common.sh@353 -- # local d=2 00:05:45.735 09:28:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.735 09:28:31 version -- scripts/common.sh@355 -- # echo 2 00:05:45.735 09:28:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.735 09:28:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.735 09:28:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.735 09:28:31 version -- scripts/common.sh@368 -- # return 0 00:05:45.735 09:28:31 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.735 09:28:31 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.735 00:05:45.735 ' 00:05:45.735 09:28:31 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.735 00:05:45.735 ' 00:05:45.735 09:28:31 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.735 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.735 00:05:45.735 ' 00:05:45.735 09:28:31 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.735 --rc genhtml_branch_coverage=1 00:05:45.735 --rc genhtml_function_coverage=1 00:05:45.735 --rc genhtml_legend=1 00:05:45.735 --rc geninfo_all_blocks=1 00:05:45.735 --rc geninfo_unexecuted_blocks=1 00:05:45.735 00:05:45.735 ' 00:05:45.735 09:28:31 version -- app/version.sh@17 -- # get_header_version major 00:05:45.735 09:28:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # cut -f2 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.735 09:28:31 version -- app/version.sh@17 -- # major=25 00:05:45.735 09:28:31 version -- app/version.sh@18 -- # get_header_version minor 00:05:45.735 09:28:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # cut -f2 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.735 09:28:31 version -- app/version.sh@18 -- # minor=1 00:05:45.735 09:28:31 version -- app/version.sh@19 -- # get_header_version patch 00:05:45.735 09:28:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # cut -f2 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.735 09:28:31 version -- app/version.sh@19 -- # patch=0 00:05:45.735 09:28:31 version -- app/version.sh@20 -- # get_header_version suffix 00:05:45.735 09:28:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # cut -f2 00:05:45.735 09:28:31 version -- app/version.sh@14 -- # tr -d '"' 00:05:45.735 09:28:31 version -- app/version.sh@20 -- # suffix=-pre 00:05:45.735 09:28:31 version -- app/version.sh@22 -- # version=25.1 00:05:45.735 09:28:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:45.735 09:28:31 version -- app/version.sh@28 -- # version=25.1rc0 00:05:45.735 09:28:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:45.735 09:28:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:45.735 09:28:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:45.735 09:28:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:45.735 00:05:45.735 real 0m0.245s 00:05:45.735 user 0m0.150s 00:05:45.735 sys 0m0.131s 00:05:45.995 ************************************ 00:05:45.995 END TEST version 00:05:45.995 ************************************ 00:05:45.995 09:28:31 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.995 09:28:31 version -- common/autotest_common.sh@10 -- # set +x 00:05:45.995 09:28:31 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:45.995 09:28:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:45.995 09:28:31 -- spdk/autotest.sh@194 -- # uname -s 00:05:45.995 09:28:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:45.995 09:28:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:45.995 09:28:31 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:05:45.995 09:28:31 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:05:45.995 09:28:31 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:45.995 09:28:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.995 09:28:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.995 09:28:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.995 ************************************ 00:05:45.995 START TEST spdk_dd 00:05:45.995 ************************************ 00:05:45.995 09:28:31 spdk_dd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:05:45.995 * Looking for test storage... 00:05:45.995 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:45.995 09:28:31 spdk_dd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.995 09:28:31 spdk_dd -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.995 09:28:31 spdk_dd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.995 09:28:31 spdk_dd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.995 09:28:31 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@345 -- # : 1 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@368 -- # return 0 00:05:45.996 09:28:31 spdk_dd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.996 09:28:31 spdk_dd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.996 --rc genhtml_branch_coverage=1 00:05:45.996 --rc genhtml_function_coverage=1 00:05:45.996 --rc genhtml_legend=1 00:05:45.996 --rc geninfo_all_blocks=1 00:05:45.996 --rc geninfo_unexecuted_blocks=1 00:05:45.996 00:05:45.996 ' 00:05:45.996 09:28:31 spdk_dd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.996 --rc genhtml_branch_coverage=1 00:05:45.996 --rc genhtml_function_coverage=1 00:05:45.996 --rc genhtml_legend=1 00:05:45.996 --rc geninfo_all_blocks=1 00:05:45.996 --rc geninfo_unexecuted_blocks=1 00:05:45.996 00:05:45.996 ' 00:05:45.996 09:28:31 spdk_dd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.996 --rc genhtml_branch_coverage=1 00:05:45.996 --rc genhtml_function_coverage=1 00:05:45.996 --rc genhtml_legend=1 00:05:45.996 --rc geninfo_all_blocks=1 00:05:45.996 --rc geninfo_unexecuted_blocks=1 00:05:45.996 00:05:45.996 ' 00:05:45.996 09:28:31 spdk_dd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.996 --rc genhtml_branch_coverage=1 00:05:45.996 --rc genhtml_function_coverage=1 00:05:45.996 --rc genhtml_legend=1 00:05:45.996 --rc geninfo_all_blocks=1 00:05:45.996 --rc geninfo_unexecuted_blocks=1 00:05:45.996 00:05:45.996 ' 00:05:45.996 09:28:31 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:45.996 09:28:31 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:45.996 09:28:31 spdk_dd -- paths/export.sh@2 -- # 
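The `lt 1.15 2` call traced above is scripts/common.sh gating on the installed lcov version: both version strings are split on `.`, `-`, or `:` and compared numerically, field by field. A condensed sketch of that comparison (the real helper also validates each field through `decimal` before comparing):

```bash
# Field-wise dotted-version comparison, as in cmp_versions. Assumes
# numeric fields; missing fields compare as 0.
version_lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < len; v++ )); do
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov is older than 2.x"   # matches this run
```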
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 09:28:31 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 09:28:31 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 09:28:31 spdk_dd -- paths/export.sh@5 -- # export PATH 00:05:45.996 09:28:31 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:45.996 09:28:31 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:46.567 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:46.567 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.567 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:46.567 09:28:32 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:05:46.567 09:28:32 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@233 -- # local class 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@235 -- # local progif 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@236 -- # class=01 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:05:46.567 09:28:32 spdk_dd -- 
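The paths/export.sh trace above shows why PATH keeps growing: the script unconditionally prepends the go/protoc/golangci directories each time it is sourced, so the same entries appear several times by this point. SPDK leaves that alone (duplicate entries are harmless, since the first match wins), but for illustration only, a first-seen-order dedup could look like the following; `dedup_path` is a hypothetical helper, not part of the repo:

```bash
# Collapse repeated PATH entries while preserving first-seen order.
dedup_path() {
  local entry out=
  declare -A seen
  while IFS= read -r -d: entry; do
    [[ -n $entry && -z ${seen[$entry]:-} ]] && { seen[$entry]=1; out+="$entry:"; }
  done <<< "$1:"   # trailing ':' so the last entry is parsed too
  printf '%s\n' "${out%:}"
}

PATH=$(dedup_path "$PATH")
```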
scripts/common.sh@238 -- # progif=02 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@18 -- # local i 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@27 -- # return 0 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:05:46.567 09:28:32 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:46.567 09:28:32 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:05:46.567 09:28:32 spdk_dd -- dd/common.sh@139 -- # local lib 00:05:46.567 09:28:32 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:05:46.567 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 
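The `iter_pci_class_code 01 08 02` expansion traced above is how `nvme_in_userspace` discovers controllers: PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe). The traced filter reduces to this pipeline:

```bash
# List NVMe PCI functions by class code. lspci -mm -n -D prints lines like:
#   0000:00:10.0 "0108" "1b36" "0010" -p02 ...
lspci -mm -n -D \
  | grep -i -- -p02 \
  | awk -v cc='"0108"' -F ' ' '{ if (cc ~ $2) print $1 }' \
  | tr -d '"'
# on this VM it yields 0000:00:10.0 and 0000:00:11.0; pci_can_use() then
# filters each BDF against the allow/block lists (both empty in this run)
```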
00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.568 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:05:46.569 * spdk_dd linked to liburing 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@146 -- # [[ -e 
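The long run of `[[ lib… == liburing.so.* ]]` tests above is `check_liburing` walking the spdk_dd binary's dynamic dependencies; the final `liburing.so.2` entry is what produces "* spdk_dd linked to liburing". A sketch of that scan:

```bash
# Read each NEEDED entry from the ELF dynamic section and flag liburing.
liburing_in_use=0
while read -r _ lib _; do
  [[ $lib == liburing.so.* ]] && liburing_in_use=1
done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)

(( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'
```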
/home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:05:46.569 09:28:32 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@40 -- # 
CONFIG_CRYPTO=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:05:46.569 09:28:32 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:05:46.570 09:28:32 spdk_dd 
-- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:05:46.570 09:28:32 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:05:46.570 09:28:32 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:05:46.570 09:28:32 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:05:46.570 09:28:32 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:05:46.570 09:28:32 spdk_dd -- dd/common.sh@153 -- # return 0 00:05:46.570 09:28:32 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:05:46.570 09:28:32 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:46.570 09:28:32 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:46.570 09:28:32 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:46.570 09:28:32 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:05:46.570 ************************************ 00:05:46.570 START TEST spdk_dd_basic_rw 00:05:46.570 ************************************ 00:05:46.570 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:05:46.570 * Looking for test storage... 00:05:46.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:05:46.570 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:46.570 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:46.570 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lcov --version 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- 
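With liburing confirmed, dd/common.sh sources test/common/build_config.sh (the long CONFIG_* listing above) and dd/dd.sh@15 then evaluates its consistency gate, which is false in this run. The failure branch is never exercised here, so its body in the sketch below is an assumption; only the arithmetic test itself comes from the trace:

```bash
# Consistency gate from the trace: a uring-enabled test job must not run
# against a spdk_dd binary that lacks liburing.
source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh

if (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )); then
  # assumed failure action; the trace never reaches this branch
  echo "SPDK_TEST_URING is set but spdk_dd is not linked to liburing" >&2
  exit 1
fi
```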
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:46.829 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:46.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.830 --rc genhtml_branch_coverage=1 00:05:46.830 --rc genhtml_function_coverage=1 00:05:46.830 --rc genhtml_legend=1 00:05:46.830 --rc geninfo_all_blocks=1 00:05:46.830 --rc geninfo_unexecuted_blocks=1 00:05:46.830 00:05:46.830 ' 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:46.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.830 --rc genhtml_branch_coverage=1 00:05:46.830 --rc genhtml_function_coverage=1 00:05:46.830 --rc genhtml_legend=1 00:05:46.830 --rc geninfo_all_blocks=1 00:05:46.830 --rc geninfo_unexecuted_blocks=1 00:05:46.830 00:05:46.830 ' 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:46.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.830 --rc genhtml_branch_coverage=1 00:05:46.830 --rc genhtml_function_coverage=1 00:05:46.830 --rc genhtml_legend=1 00:05:46.830 --rc geninfo_all_blocks=1 00:05:46.830 --rc geninfo_unexecuted_blocks=1 00:05:46.830 00:05:46.830 ' 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:46.830 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:46.830 --rc genhtml_branch_coverage=1 00:05:46.830 --rc genhtml_function_coverage=1 00:05:46.830 --rc genhtml_legend=1 00:05:46.830 --rc geninfo_all_blocks=1 00:05:46.830 --rc geninfo_unexecuted_blocks=1 00:05:46.830 00:05:46.830 ' 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 
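The next step, traced below, derives the drive's native block size for the basic_rw tests: `get_native_nvme_bs` captures `spdk_nvme_identify` output for 0000:00:10.0 (the large controller dump, which appears twice because both traced `[[ … =~ … ]]` tests expand it inline), extracts the current LBA format index, then that format's data size. A sketch of the two regex steps:

```bash
# Native block size from identify data: find "Current LBA Format: #NN",
# then read that format's "Data Size".
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
       -r 'trtype:pcie traddr:0000:00:10.0')

re='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re ]] && lbaf=${BASH_REMATCH[1]}        # 04 on this controller

re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re ]] && echo "${BASH_REMATCH[1]}"      # 4096
```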
00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:05:46.830 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update 
Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 
Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration 
Mechanisms Supported
Weighted Round Robin: Not Supported
Vendor Specific: Not Supported
Reset Timeout: 7500 ms
Doorbell Stride: 4 bytes
NVM Subsystem Reset: Not Supported
Command Sets Supported
NVM Command Set: Supported
Boot Partition: Not Supported
Memory Page Size Minimum: 4096 bytes
Memory Page Size Maximum: 65536 bytes
Persistent Memory Region: Not Supported
Optional Asynchronous Events Supported
Namespace Attribute Notices: Supported
Firmware Activation Notices: Not Supported
ANA Change Notices: Not Supported
PLE Aggregate Log Change Notices: Not Supported
LBA Status Info Alert Notices: Not Supported
EGE Aggregate Log Change Notices: Not Supported
Normal NVM Subsystem Shutdown event: Not Supported
Zone Descriptor Change Notices: Not Supported
Discovery Log Change Notices: Not Supported
Controller Attributes
128-bit Host Identifier: Not Supported
Non-Operational Permissive Mode: Not Supported
NVM Sets: Not Supported
Read Recovery Levels: Not Supported
Endurance Groups: Not Supported
Predictable Latency Mode: Not Supported
Traffic Based Keep ALive: Not Supported
Namespace Granularity: Not Supported
SQ Associations: Not Supported
UUID List: Not Supported
Multi-Domain Subsystem: Not Supported
Fixed Capacity Management: Not Supported
Variable Capacity Management: Not Supported
Delete Endurance Group: Not Supported
Delete NVM Set: Not Supported
Extended LBA Formats Supported: Supported
Flexible Data Placement Supported: Not Supported
Controller Memory Buffer Support
================================
Supported: No
Persistent Memory Region Support
================================
Supported: No
Admin Command Set Attributes
============================
Security Send/Receive: Not Supported
Format NVM: Supported
Firmware Activate/Download: Not Supported
Namespace Management: Supported
Device Self-Test: Not Supported
Directives: Supported
NVMe-MI: Not Supported
Virtualization Management: Not Supported
Doorbell Buffer Config: Supported
Get LBA Status Capability: Not Supported
Command & Feature Lockdown Capability: Not Supported
Abort Command Limit: 4
Async Event Request Limit: 4
Number of Firmware Slots: N/A
Firmware Slot 1 Read-Only: N/A
Firmware Activation Without Reset: N/A
Multiple Update Detection Support: N/A
Firmware Update Granularity: No Information Provided
Per-Namespace SMART Log: Yes
Asymmetric Namespace Access Log Page: Not Supported
Subsystem NQN: nqn.2019-08.org.qemu:12340
Command Effects Log Page: Supported
Get Log Page Extended Data: Supported
Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported
Supported Log Pages Log Page: May Support
Commands Supported & Effects Log Page: Not Supported
Feature Identifiers & Effects Log Page:May Support
NVMe-MI Commands & Effects Log Page: May Support
Data Area 4 for Telemetry Log: Not Supported
Error Log Page Entries Supported: 1
Keep Alive: Not Supported
NVM Command Set Attributes
==========================
Submission Queue Entry Size
  Max: 64
  Min: 64
Completion Queue Entry Size
  Max: 16
  Min: 16
Number of Namespaces: 256
Compare Command: Supported
Write Uncorrectable Command: Not Supported
Dataset Management Command: Supported
Write Zeroes Command: Supported
Set Features Save Field: Supported
Reservations: Not Supported
Timestamp: Supported
Copy: Supported
Volatile Write Cache: Present
Atomic Write Unit (Normal): 1
Atomic Write Unit (PFail): 1
Atomic Compare & Write Unit: 1
Fused Compare & Write: Not Supported
Scatter-Gather List
SGL Command Set: Supported
SGL Keyed: Not Supported
SGL Bit Bucket Descriptor: Not Supported
SGL Metadata Pointer: Not Supported
Oversized SGL: Not Supported
SGL Metadata Address: Not Supported
SGL Offset: Not Supported
Transport SGL Data Block: Not Supported
Replay Protected Memory Block: Not Supported
Firmware Slot Information
=========================
Active slot: 1
Slot 1 Firmware Revision: 1.0
Commands Supported and Effects
==============================
Admin Commands
--------------
Delete I/O Submission Queue (00h): Supported
Create I/O Submission Queue (01h): Supported
Get Log Page (02h): Supported
Delete I/O Completion Queue (04h): Supported
Create I/O Completion Queue (05h): Supported
Identify (06h): Supported
Abort (08h): Supported
Set Features (09h): Supported
Get Features (0Ah): Supported
Asynchronous Event Request (0Ch): Supported
Namespace Attachment (15h): Supported NS-Inventory-Change
Directive Send (19h): Supported
Directive Receive (1Ah): Supported
Virtualization Management (1Ch): Supported
Doorbell Buffer Config (7Ch): Supported
Format NVM (80h): Supported LBA-Change
I/O Commands
------------
Flush (00h): Supported LBA-Change
Write (01h): Supported LBA-Change
Read (02h): Supported
Compare (05h): Supported
Write Zeroes (08h): Supported LBA-Change
Dataset Management (09h): Supported LBA-Change
Unknown (0Ch): Supported
Unknown (12h): Supported
Copy (19h): Supported LBA-Change
Unknown (1Dh): Supported LBA-Change
Error Log
=========
Arbitration
===========
Arbitration Burst: no limit
Power Management
================
Number of Power States: 1
Current Power State: Power State #0
Power State #0: Max Power: 25.00 W
Non-Operational State: Operational
Entry Latency: 16 microseconds
Exit Latency: 4 microseconds
Relative Read Throughput: 0
Relative Read Latency: 0
Relative Write Throughput: 0
Relative Write Latency: 0
Idle Power: Not Reported
Active Power: Not Reported
Non-Operational Permissive Mode: Not Supported
Health Information
==================
Critical Warnings:
  Available Spare Space: OK
  Temperature: OK
  Device Reliability: OK
  Read Only: No
  Volatile Memory Backup: OK
Current Temperature: 323 Kelvin (50 Celsius)
Temperature Threshold: 343 Kelvin (70 Celsius)
Available Spare: 0%
Available Spare Threshold: 0%
Life Percentage Used: 0%
Data Units Read: 22
Data Units Written: 3
Host Read Commands: 496
Host Write Commands: 2
Controller Busy Time: 0 minutes
Power Cycles: 0
Power On Hours: 0 hours
Unsafe Shutdowns: 0
Unrecoverable Media Errors: 0
Lifetime Error Log Entries: 0
Warning Temperature Time: 0 minutes
Critical Temperature Time: 0 minutes
Number of Queues
================
Number of I/O Submission Queues: 64
Number of I/O Completion Queues: 64
ZNS Specific Controller Data
============================
Zone Append Size Limit: 0
Active Namespaces
=================
Namespace ID:1
Error Recovery Timeout: Unlimited
Command Set Identifier: NVM (00h)
Deallocate: Supported
Deallocated/Unwritten Error: Supported
Deallocated Read Value: All 0x00
Deallocate in Write Zeroes: Not Supported
Deallocated Guard Field: 0xFFFF
Flush: Supported
Reservation: Not Supported
Namespace Sharing Capabilities: Private
Size (in LBAs): 1310720 (5GiB)
Capacity (in LBAs): 1310720 (5GiB)
Utilization (in LBAs): 1310720 (5GiB)
Thin Provisioning: Not Supported
Per-NS Atomic Units: No
Maximum Single Source Range Length: 128
Maximum Copy Length: 128
Maximum Source Range Count: 128
NGUID/EUI64 Never Reused: No
Namespace Write Protected: No
Number of LBA Formats: 8
Current LBA Format: LBA Format #04
LBA Format #00: Data Size: 512 Metadata Size: 0
LBA Format #01: Data Size: 512 Metadata Size: 8
LBA Format #02: Data Size: 512 Metadata Size: 16
LBA Format #03: Data Size: 512 Metadata Size: 64
LBA Format #04: Data Size: 4096 Metadata Size: 0
LBA Format #05: Data Size: 4096 Metadata Size: 8
LBA Format #06: Data Size: 4096 Metadata Size: 16
LBA Format #07: Data Size: 4096 Metadata Size: 64
NVM Specific Namespace Data
===========================
Logical Block Storage Tag Mask: 0
Protection Information Capabilities:
  16b Guard Protection Information Storage Tag Support: No
  16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0
  Storage Tag Check Read Support: No
Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
=~ LBA Format #04: Data Size: *([0-9]+) ]]
00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096
00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096
00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096
00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']'
00:05:47.092 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable
00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf
00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable
00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x
00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # :
00:05:47.093 ************************************
00:05:47.093 START TEST dd_bs_lt_native_bs
00:05:47.093 ************************************
09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1127 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0
09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61
09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:05:47.093 09:28:32 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:05:47.093 { 00:05:47.093 "subsystems": [ 00:05:47.093 { 00:05:47.093 "subsystem": "bdev", 00:05:47.093 "config": [ 00:05:47.093 { 00:05:47.093 "params": { 00:05:47.093 "trtype": "pcie", 00:05:47.093 "traddr": "0000:00:10.0", 00:05:47.093 "name": "Nvme0" 00:05:47.093 }, 00:05:47.093 "method": "bdev_nvme_attach_controller" 00:05:47.093 }, 00:05:47.093 { 00:05:47.093 "method": "bdev_wait_for_examine" 00:05:47.093 } 00:05:47.093 ] 00:05:47.093 } 00:05:47.093 ] 00:05:47.093 } 00:05:47.093 [2024-11-05 09:28:32.885337] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:47.093 [2024-11-05 09:28:32.885426] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59589 ] 00:05:47.093 [2024-11-05 09:28:33.039154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.352 [2024-11-05 09:28:33.078165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.352 [2024-11-05 09:28:33.111255] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.352 [2024-11-05 09:28:33.206629] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:05:47.352 [2024-11-05 09:28:33.206698] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.352 [2024-11-05 09:28:33.283454] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:05:47.611 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.612 00:05:47.612 real 0m0.520s 00:05:47.612 user 0m0.349s 00:05:47.612 sys 0m0.123s 00:05:47.612 ************************************ 00:05:47.612 END TEST dd_bs_lt_native_bs 00:05:47.612 ************************************ 00:05:47.612 
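Note on the test that just ended: dd_bs_lt_native_bs is a negative test. The harness scrapes the controller's current LBA format out of the identify dump ([[ ... =~ LBA Format #04: Data Size: *([0-9]+) ]], giving native_bs=4096), then asks spdk_dd to write with --bs=2048. The NOT wrapper succeeds only if the command fails, and the es=234, es=106, es=1 lines are the harness collapsing the raw exit status into a canonical non-zero value before asserting on it. A minimal standalone sketch of the same check (assumptions: $SPDK_ROOT points at an SPDK checkout, bdev.json holds the bdev config printed above, and the payload file stands in for the harness's /dev/fd/62 input; the spdk_dd flags are the ones visible in this log):

head -c 4096 /dev/urandom > payload.bin       # finite input in place of /dev/fd/62
if "$SPDK_ROOT/build/bin/spdk_dd" --if=payload.bin --ob=Nvme0n1 --bs=2048 \
      --json bdev.json; then
    echo "FAIL: spdk_dd accepted --bs below the 4096-byte native block size" >&2
    exit 1
fi
echo "OK: rejected, matching the '--bs value cannot be less than ...' error above"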
09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:47.612 ************************************ 00:05:47.612 START TEST dd_rw 00:05:47.612 ************************************ 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1127 -- # basic_rw 4096 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:47.612 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:05:48.180 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:48.180 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.180 09:28:33 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.180 [2024-11-05 09:28:34.010002] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
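The basic_rw setup above builds its matrix by left-shifting the native block size: for bs in {0..2} it collects 4096<<0, 4096<<1 and 4096<<2 into bss, i.e. 4 KiB, 8 KiB and 16 KiB, and runs each at queue depths 1 and 64. The counts seen later in this log (15, 7, 3) keep each payload near 60 KiB; integer-dividing 61440 by the block size reproduces them, though that formula is an inference from the log, not necessarily the harness's code. A sketch of the arithmetic:

native_bs=4096
qds=(1 64)
bss=()
for bs in {0..2}; do
    bss+=($((native_bs << bs)))          # 4096 8192 16384
done
for b in "${bss[@]}"; do
    count=$((61440 / b))                 # 15, 7, 3 as seen in this log (inferred)
    for qd in "${qds[@]}"; do
        echo "bs=$b qd=$qd count=$count size=$((count * b))"   # 61440 57344 49152
    done
done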
00:05:48.180 [2024-11-05 09:28:34.010260] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59626 ] 00:05:48.180 { 00:05:48.180 "subsystems": [ 00:05:48.180 { 00:05:48.180 "subsystem": "bdev", 00:05:48.180 "config": [ 00:05:48.180 { 00:05:48.180 "params": { 00:05:48.180 "trtype": "pcie", 00:05:48.180 "traddr": "0000:00:10.0", 00:05:48.180 "name": "Nvme0" 00:05:48.180 }, 00:05:48.180 "method": "bdev_nvme_attach_controller" 00:05:48.180 }, 00:05:48.180 { 00:05:48.180 "method": "bdev_wait_for_examine" 00:05:48.180 } 00:05:48.180 ] 00:05:48.180 } 00:05:48.180 ] 00:05:48.180 } 00:05:48.440 [2024-11-05 09:28:34.154718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.440 [2024-11-05 09:28:34.184269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.440 [2024-11-05 09:28:34.212257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.440  [2024-11-05T09:28:34.657Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:48.699 00:05:48.699 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:05:48.699 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:48.699 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.699 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:48.699 [2024-11-05 09:28:34.474449] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:48.699 [2024-11-05 09:28:34.474725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59634 ] 00:05:48.699 { 00:05:48.699 "subsystems": [ 00:05:48.699 { 00:05:48.699 "subsystem": "bdev", 00:05:48.699 "config": [ 00:05:48.699 { 00:05:48.699 "params": { 00:05:48.699 "trtype": "pcie", 00:05:48.699 "traddr": "0000:00:10.0", 00:05:48.699 "name": "Nvme0" 00:05:48.699 }, 00:05:48.699 "method": "bdev_nvme_attach_controller" 00:05:48.699 }, 00:05:48.699 { 00:05:48.699 "method": "bdev_wait_for_examine" 00:05:48.699 } 00:05:48.699 ] 00:05:48.699 } 00:05:48.699 ] 00:05:48.699 } 00:05:48.699 [2024-11-05 09:28:34.617429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.699 [2024-11-05 09:28:34.644346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.958 [2024-11-05 09:28:34.671957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:48.958  [2024-11-05T09:28:34.916Z] Copying: 60/60 [kB] (average 19 MBps) 00:05:48.958 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:48.958 09:28:34 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:49.217 [2024-11-05 09:28:34.941672] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:49.217 [2024-11-05 09:28:34.941941] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:05:49.217 { 00:05:49.217 "subsystems": [ 00:05:49.217 { 00:05:49.217 "subsystem": "bdev", 00:05:49.217 "config": [ 00:05:49.217 { 00:05:49.217 "params": { 00:05:49.217 "trtype": "pcie", 00:05:49.217 "traddr": "0000:00:10.0", 00:05:49.217 "name": "Nvme0" 00:05:49.217 }, 00:05:49.217 "method": "bdev_nvme_attach_controller" 00:05:49.217 }, 00:05:49.217 { 00:05:49.217 "method": "bdev_wait_for_examine" 00:05:49.217 } 00:05:49.217 ] 00:05:49.217 } 00:05:49.217 ] 00:05:49.217 } 00:05:49.217 [2024-11-05 09:28:35.085714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.217 [2024-11-05 09:28:35.114155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.217 [2024-11-05 09:28:35.141257] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:49.477  [2024-11-05T09:28:35.435Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:49.477 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:49.477 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.045 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:05:50.045 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:50.045 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.045 09:28:35 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.045 [2024-11-05 09:28:35.916974] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
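Each (bs, qd) pass in this test is the same four-step verify cycle just completed above: write the generated dd.dump0 through the Nvme0n1 bdev, read the region back into dd.dump1, byte-compare the two, then wipe the first MiB so the next pass starts clean (that is what clear_nvme's /dev/zero copy with bs=1048576 and count=1 does). Condensed into one sketch, with the exact flags and paths from this log and an assumed bdev.json config file:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
D=/home/vagrant/spdk_repo/spdk/test/dd
"$DD" --if="$D/dd.dump0" --ob=Nvme0n1 --bs=4096 --qd=1 --json bdev.json
"$DD" --ib=Nvme0n1 --of="$D/dd.dump1" --bs=4096 --qd=1 --count=15 --json bdev.json
diff -q "$D/dd.dump0" "$D/dd.dump1"          # any mismatch fails the test
"$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json bdev.json   # clear_nvme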
00:05:50.045 [2024-11-05 09:28:35.917059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59668 ] 00:05:50.045 { 00:05:50.045 "subsystems": [ 00:05:50.045 { 00:05:50.045 "subsystem": "bdev", 00:05:50.045 "config": [ 00:05:50.045 { 00:05:50.045 "params": { 00:05:50.045 "trtype": "pcie", 00:05:50.045 "traddr": "0000:00:10.0", 00:05:50.045 "name": "Nvme0" 00:05:50.045 }, 00:05:50.045 "method": "bdev_nvme_attach_controller" 00:05:50.045 }, 00:05:50.045 { 00:05:50.045 "method": "bdev_wait_for_examine" 00:05:50.045 } 00:05:50.045 ] 00:05:50.045 } 00:05:50.045 ] 00:05:50.045 } 00:05:50.305 [2024-11-05 09:28:36.061649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.305 [2024-11-05 09:28:36.092443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.305 [2024-11-05 09:28:36.121338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.305  [2024-11-05T09:28:36.522Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:50.564 00:05:50.564 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:05:50.564 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:50.564 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:50.564 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:50.564 [2024-11-05 09:28:36.385459] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:50.564 [2024-11-05 09:28:36.385582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59682 ] 00:05:50.564 { 00:05:50.564 "subsystems": [ 00:05:50.564 { 00:05:50.564 "subsystem": "bdev", 00:05:50.564 "config": [ 00:05:50.564 { 00:05:50.564 "params": { 00:05:50.564 "trtype": "pcie", 00:05:50.564 "traddr": "0000:00:10.0", 00:05:50.564 "name": "Nvme0" 00:05:50.564 }, 00:05:50.564 "method": "bdev_nvme_attach_controller" 00:05:50.564 }, 00:05:50.564 { 00:05:50.565 "method": "bdev_wait_for_examine" 00:05:50.565 } 00:05:50.565 ] 00:05:50.565 } 00:05:50.565 ] 00:05:50.565 } 00:05:50.824 [2024-11-05 09:28:36.531603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.824 [2024-11-05 09:28:36.568004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.824 [2024-11-05 09:28:36.595724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:50.824  [2024-11-05T09:28:37.041Z] Copying: 60/60 [kB] (average 58 MBps) 00:05:51.083 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.084 09:28:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.084 [2024-11-05 09:28:36.861263] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:51.084 [2024-11-05 09:28:36.861935] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59697 ] 00:05:51.084 { 00:05:51.084 "subsystems": [ 00:05:51.084 { 00:05:51.084 "subsystem": "bdev", 00:05:51.084 "config": [ 00:05:51.084 { 00:05:51.084 "params": { 00:05:51.084 "trtype": "pcie", 00:05:51.084 "traddr": "0000:00:10.0", 00:05:51.084 "name": "Nvme0" 00:05:51.084 }, 00:05:51.084 "method": "bdev_nvme_attach_controller" 00:05:51.084 }, 00:05:51.084 { 00:05:51.084 "method": "bdev_wait_for_examine" 00:05:51.084 } 00:05:51.084 ] 00:05:51.084 } 00:05:51.084 ] 00:05:51.084 } 00:05:51.084 [2024-11-05 09:28:37.009124] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.084 [2024-11-05 09:28:37.035677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.343 [2024-11-05 09:28:37.063372] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.343  [2024-11-05T09:28:37.301Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:51.343 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:51.343 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.911 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:05:51.911 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:51.911 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:51.911 09:28:37 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:51.911 [2024-11-05 09:28:37.812615] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
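Note how spdk_dd never reads a config file from disk in these runs: gen_conf prints the JSON shown repeatedly above and the harness hands it over an anonymous descriptor, hence --json /dev/fd/62. The config is always the same two steps, attach the PCIe controller at 0000:00:10.0 as Nvme0, then bdev_wait_for_examine. Plain bash process substitution reproduces the mechanism (a stand-in for gen_conf, whose internals may differ):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
)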
00:05:51.911 [2024-11-05 09:28:37.812719] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59716 ] 00:05:51.911 { 00:05:51.911 "subsystems": [ 00:05:51.911 { 00:05:51.911 "subsystem": "bdev", 00:05:51.911 "config": [ 00:05:51.911 { 00:05:51.911 "params": { 00:05:51.911 "trtype": "pcie", 00:05:51.911 "traddr": "0000:00:10.0", 00:05:51.911 "name": "Nvme0" 00:05:51.911 }, 00:05:51.911 "method": "bdev_nvme_attach_controller" 00:05:51.911 }, 00:05:51.911 { 00:05:51.911 "method": "bdev_wait_for_examine" 00:05:51.911 } 00:05:51.911 ] 00:05:51.911 } 00:05:51.911 ] 00:05:51.911 } 00:05:52.171 [2024-11-05 09:28:37.958791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.171 [2024-11-05 09:28:37.990554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.171 [2024-11-05 09:28:38.018617] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.171  [2024-11-05T09:28:38.388Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:52.430 00:05:52.430 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:05:52.430 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:52.430 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.430 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.430 [2024-11-05 09:28:38.288619] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:52.430 [2024-11-05 09:28:38.289191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59730 ] 00:05:52.430 { 00:05:52.430 "subsystems": [ 00:05:52.430 { 00:05:52.430 "subsystem": "bdev", 00:05:52.430 "config": [ 00:05:52.430 { 00:05:52.430 "params": { 00:05:52.430 "trtype": "pcie", 00:05:52.430 "traddr": "0000:00:10.0", 00:05:52.430 "name": "Nvme0" 00:05:52.430 }, 00:05:52.430 "method": "bdev_nvme_attach_controller" 00:05:52.430 }, 00:05:52.430 { 00:05:52.430 "method": "bdev_wait_for_examine" 00:05:52.430 } 00:05:52.430 ] 00:05:52.430 } 00:05:52.430 ] 00:05:52.430 } 00:05:52.689 [2024-11-05 09:28:38.434046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.689 [2024-11-05 09:28:38.460502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.689 [2024-11-05 09:28:38.487125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:52.689  [2024-11-05T09:28:38.906Z] Copying: 56/56 [kB] (average 27 MBps) 00:05:52.948 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:52.948 09:28:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:52.948 [2024-11-05 09:28:38.754879] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:52.948 [2024-11-05 09:28:38.754968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59741 ] 00:05:52.948 { 00:05:52.948 "subsystems": [ 00:05:52.948 { 00:05:52.948 "subsystem": "bdev", 00:05:52.948 "config": [ 00:05:52.948 { 00:05:52.948 "params": { 00:05:52.948 "trtype": "pcie", 00:05:52.948 "traddr": "0000:00:10.0", 00:05:52.948 "name": "Nvme0" 00:05:52.948 }, 00:05:52.948 "method": "bdev_nvme_attach_controller" 00:05:52.948 }, 00:05:52.948 { 00:05:52.948 "method": "bdev_wait_for_examine" 00:05:52.948 } 00:05:52.948 ] 00:05:52.948 } 00:05:52.948 ] 00:05:52.948 } 00:05:52.948 [2024-11-05 09:28:38.901356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.207 [2024-11-05 09:28:38.929993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.207 [2024-11-05 09:28:38.956812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:53.207  [2024-11-05T09:28:39.424Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:53.466 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:53.466 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.725 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:05:53.725 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:53.725 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:53.725 09:28:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:53.725 [2024-11-05 09:28:39.675947] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
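--qd is the I/O queue depth spdk_dd keeps in flight against the bdev, and it is what separates the slow and fast passes in this log: the 4 KiB copies went from roughly 19 MBps at --qd=1 to roughly 58 MBps at --qd=64, and the 8 KiB copies move from about 27 MBps to about 54 MBps in the qd=64 pass that follows. The invocations differ only in that flag:

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$DD" --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1  --json bdev.json   # ~27 MBps above
"$DD" --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json bdev.json   # ~54 MBps below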
00:05:53.725 [2024-11-05 09:28:39.677029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59759 ] 00:05:53.725 { 00:05:53.725 "subsystems": [ 00:05:53.725 { 00:05:53.725 "subsystem": "bdev", 00:05:53.725 "config": [ 00:05:53.725 { 00:05:53.725 "params": { 00:05:53.725 "trtype": "pcie", 00:05:53.725 "traddr": "0000:00:10.0", 00:05:53.725 "name": "Nvme0" 00:05:53.725 }, 00:05:53.725 "method": "bdev_nvme_attach_controller" 00:05:53.725 }, 00:05:53.725 { 00:05:53.725 "method": "bdev_wait_for_examine" 00:05:53.725 } 00:05:53.725 ] 00:05:53.725 } 00:05:53.725 ] 00:05:53.725 } 00:05:53.984 [2024-11-05 09:28:39.829165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.984 [2024-11-05 09:28:39.855987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.984 [2024-11-05 09:28:39.882863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.242  [2024-11-05T09:28:40.201Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:54.243 00:05:54.243 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:05:54.243 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:54.243 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.243 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.243 [2024-11-05 09:28:40.144999] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:54.243 [2024-11-05 09:28:40.145086] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59778 ] 00:05:54.243 { 00:05:54.243 "subsystems": [ 00:05:54.243 { 00:05:54.243 "subsystem": "bdev", 00:05:54.243 "config": [ 00:05:54.243 { 00:05:54.243 "params": { 00:05:54.243 "trtype": "pcie", 00:05:54.243 "traddr": "0000:00:10.0", 00:05:54.243 "name": "Nvme0" 00:05:54.243 }, 00:05:54.243 "method": "bdev_nvme_attach_controller" 00:05:54.243 }, 00:05:54.243 { 00:05:54.243 "method": "bdev_wait_for_examine" 00:05:54.243 } 00:05:54.243 ] 00:05:54.243 } 00:05:54.243 ] 00:05:54.243 } 00:05:54.502 [2024-11-05 09:28:40.288990] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.502 [2024-11-05 09:28:40.315897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.502 [2024-11-05 09:28:40.343262] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:54.502  [2024-11-05T09:28:40.718Z] Copying: 56/56 [kB] (average 54 MBps) 00:05:54.760 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:54.760 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:54.761 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:54.761 09:28:40 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:54.761 { 00:05:54.761 "subsystems": [ 00:05:54.761 { 00:05:54.761 "subsystem": "bdev", 00:05:54.761 "config": [ 00:05:54.761 { 00:05:54.761 "params": { 00:05:54.761 "trtype": "pcie", 00:05:54.761 "traddr": "0000:00:10.0", 00:05:54.761 "name": "Nvme0" 00:05:54.761 }, 00:05:54.761 "method": "bdev_nvme_attach_controller" 00:05:54.761 }, 00:05:54.761 { 00:05:54.761 "method": "bdev_wait_for_examine" 00:05:54.761 } 00:05:54.761 ] 00:05:54.761 } 00:05:54.761 ] 00:05:54.761 } 00:05:54.761 [2024-11-05 09:28:40.620261] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:54.761 [2024-11-05 09:28:40.620348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59788 ] 00:05:55.020 [2024-11-05 09:28:40.768214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.020 [2024-11-05 09:28:40.797140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.020 [2024-11-05 09:28:40.827969] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.020  [2024-11-05T09:28:41.237Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:55.279 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:55.279 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.538 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:05:55.538 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:55.538 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:55.538 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:55.797 { 00:05:55.797 "subsystems": [ 00:05:55.797 { 00:05:55.797 "subsystem": "bdev", 00:05:55.797 "config": [ 00:05:55.797 { 00:05:55.797 "params": { 00:05:55.797 "trtype": "pcie", 00:05:55.797 "traddr": "0000:00:10.0", 00:05:55.797 "name": "Nvme0" 00:05:55.797 }, 00:05:55.797 "method": "bdev_nvme_attach_controller" 00:05:55.797 }, 00:05:55.797 { 00:05:55.797 "method": "bdev_wait_for_examine" 00:05:55.797 } 00:05:55.797 ] 00:05:55.797 } 00:05:55.797 ] 00:05:55.797 } 00:05:55.797 [2024-11-05 09:28:41.525412] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:55.797 [2024-11-05 09:28:41.525531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 00:05:55.797 [2024-11-05 09:28:41.673169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.797 [2024-11-05 09:28:41.701335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.797 [2024-11-05 09:28:41.730296] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.056  [2024-11-05T09:28:42.014Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:56.056 00:05:56.056 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:56.056 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:05:56.056 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.056 09:28:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.056 [2024-11-05 09:28:41.998261] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:56.056 [2024-11-05 09:28:41.998364] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:05:56.056 { 00:05:56.056 "subsystems": [ 00:05:56.056 { 00:05:56.056 "subsystem": "bdev", 00:05:56.056 "config": [ 00:05:56.056 { 00:05:56.056 "params": { 00:05:56.056 "trtype": "pcie", 00:05:56.056 "traddr": "0000:00:10.0", 00:05:56.056 "name": "Nvme0" 00:05:56.056 }, 00:05:56.056 "method": "bdev_nvme_attach_controller" 00:05:56.056 }, 00:05:56.056 { 00:05:56.056 "method": "bdev_wait_for_examine" 00:05:56.056 } 00:05:56.056 ] 00:05:56.056 } 00:05:56.056 ] 00:05:56.056 } 00:05:56.316 [2024-11-05 09:28:42.141573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.316 [2024-11-05 09:28:42.168073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.316 [2024-11-05 09:28:42.194898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.575  [2024-11-05T09:28:42.533Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:56.575 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 
00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:56.575 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:56.575 [2024-11-05 09:28:42.477598] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:56.575 [2024-11-05 09:28:42.477694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59836 ] 00:05:56.575 { 00:05:56.575 "subsystems": [ 00:05:56.575 { 00:05:56.575 "subsystem": "bdev", 00:05:56.575 "config": [ 00:05:56.575 { 00:05:56.575 "params": { 00:05:56.575 "trtype": "pcie", 00:05:56.575 "traddr": "0000:00:10.0", 00:05:56.575 "name": "Nvme0" 00:05:56.575 }, 00:05:56.575 "method": "bdev_nvme_attach_controller" 00:05:56.575 }, 00:05:56.575 { 00:05:56.575 "method": "bdev_wait_for_examine" 00:05:56.575 } 00:05:56.575 ] 00:05:56.575 } 00:05:56.575 ] 00:05:56.575 } 00:05:56.834 [2024-11-05 09:28:42.617289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.834 [2024-11-05 09:28:42.644491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.834 [2024-11-05 09:28:42.676469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:56.834  [2024-11-05T09:28:43.051Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:05:57.093 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:05:57.093 09:28:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.660 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:05:57.660 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:05:57.660 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.660 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:57.660 [2024-11-05 09:28:43.398412] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:57.660 [2024-11-05 09:28:43.398545] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59855 ] 00:05:57.660 { 00:05:57.660 "subsystems": [ 00:05:57.660 { 00:05:57.660 "subsystem": "bdev", 00:05:57.660 "config": [ 00:05:57.660 { 00:05:57.660 "params": { 00:05:57.660 "trtype": "pcie", 00:05:57.660 "traddr": "0000:00:10.0", 00:05:57.660 "name": "Nvme0" 00:05:57.660 }, 00:05:57.660 "method": "bdev_nvme_attach_controller" 00:05:57.660 }, 00:05:57.660 { 00:05:57.660 "method": "bdev_wait_for_examine" 00:05:57.660 } 00:05:57.660 ] 00:05:57.660 } 00:05:57.660 ] 00:05:57.660 } 00:05:57.660 [2024-11-05 09:28:43.548239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.660 [2024-11-05 09:28:43.575224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.660 [2024-11-05 09:28:43.602303] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:57.920  [2024-11-05T09:28:43.878Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:57.920 00:05:57.920 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:05:57.920 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:05:57.920 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:57.920 09:28:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.179 [2024-11-05 09:28:43.889543] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:58.179 [2024-11-05 09:28:43.889679] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59863 ] 00:05:58.179 { 00:05:58.179 "subsystems": [ 00:05:58.179 { 00:05:58.179 "subsystem": "bdev", 00:05:58.179 "config": [ 00:05:58.179 { 00:05:58.179 "params": { 00:05:58.179 "trtype": "pcie", 00:05:58.179 "traddr": "0000:00:10.0", 00:05:58.179 "name": "Nvme0" 00:05:58.179 }, 00:05:58.179 "method": "bdev_nvme_attach_controller" 00:05:58.179 }, 00:05:58.179 { 00:05:58.179 "method": "bdev_wait_for_examine" 00:05:58.179 } 00:05:58.179 ] 00:05:58.179 } 00:05:58.179 ] 00:05:58.179 } 00:05:58.179 [2024-11-05 09:28:44.035360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.179 [2024-11-05 09:28:44.062862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.179 [2024-11-05 09:28:44.090857] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.438  [2024-11-05T09:28:44.396Z] Copying: 48/48 [kB] (average 46 MBps) 00:05:58.438 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:05:58.438 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.438 [2024-11-05 09:28:44.371985] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:05:58.438 [2024-11-05 09:28:44.372085] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59884 ] 00:05:58.438 { 00:05:58.438 "subsystems": [ 00:05:58.438 { 00:05:58.438 "subsystem": "bdev", 00:05:58.438 "config": [ 00:05:58.438 { 00:05:58.438 "params": { 00:05:58.438 "trtype": "pcie", 00:05:58.438 "traddr": "0000:00:10.0", 00:05:58.438 "name": "Nvme0" 00:05:58.438 }, 00:05:58.438 "method": "bdev_nvme_attach_controller" 00:05:58.438 }, 00:05:58.438 { 00:05:58.438 "method": "bdev_wait_for_examine" 00:05:58.438 } 00:05:58.438 ] 00:05:58.438 } 00:05:58.438 ] 00:05:58.438 } 00:05:58.697 [2024-11-05 09:28:44.515175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.697 [2024-11-05 09:28:44.541757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.697 [2024-11-05 09:28:44.570805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.957  [2024-11-05T09:28:44.915Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:05:58.957 00:05:58.957 00:05:58.957 real 0m11.385s 00:05:58.957 user 0m8.516s 00:05:58.957 sys 0m3.496s 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.957 ************************************ 00:05:58.957 END TEST dd_rw 00:05:58.957 ************************************ 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:05:58.957 ************************************ 00:05:58.957 START TEST dd_rw_offset 00:05:58.957 ************************************ 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1127 -- # basic_offset 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:58.957 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:05:58.958 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=gk58n23ew3x546jna5f3p21e7bjwgm4mfg4tbvd2hhmjgfmelj7k2vwb8uvhf81pjbm65hh9atj6mh4odp3grnbtpkxjq2gf6ctoeq1hx9swl23xx9743n88yfb8ggy383aux2du4rt8c8m82jeuu7o6hhp9ukepihityvtvz945ubuodqt4bbfhlzq1g3q2lwj2jkm4jr0p18q0kq5o3sy7lidbf9fpexrgsvukeczhfuslchlg38awpokvkjznvjhnw1jag7tpi3qqrsxa8pedifw02sxvq4snrv6zs1xcifbfprb2va6yammavq5pirlf44gg0gjo3pnm45fizzx5miyivy0bflt08nk23003183bavjzk4odpzky3gso7p2m4c6pza2cwd37r1oezjz8oy8o6274c192imnheh93pbgmxoz2zqv1uf1crt68a2z48bgsxn4g3dt8qb90bapfi8fk4name8dc9hwhzh46h8l8rvk23h2vm1n769qo16f87ozwlfiem5eq2fixyh7ywv5v2mcdwodyi7ilkt5zpaj8mtbpd1dgjm9wszfuw2tdvppeoby1drzsgypq5aema5u3v8o3bpiivgym95eno1mmtykjf35szvkup608cq0v8i1awfhfxnumx7h7g44hx69qaj3lkczokm9s2yh3k3h1a9xdbawfs1xk994dd5di29jak31g2k77j28rgelkylizy6g2dqw5dwk4c5ddxk97ff5u3x9hctkrskxif7vspx9b4ert3nnnyp5l5vk9biy1zujg18j1lo56y4inz7kethojcmirb87dvefgrm8oa1ake47ttt6gzpfiv6jxu70juc689mbsm7onq87wiqy0ux6rrwy7xcmyv8bpn2vd0z6ayofnygestda54woeaqa1qv9598o183sh6ykppd8zanxy6969uym0g8d1ioju2hlm77b27vxxp8p7iits9rplnoe3zycqyd8qrn5ho55qv0gx76yzavpcoy8rjrw9ksk4bnoi8rewdmthrvk56zjv0quota85joy2cyi8a6duuwciv6x0493tjz1ir80y07sa72lp5get7ao582ssqo27cauj8atpd8bvyr23kcl78n0jfxnhw52oea4kasf48llljd4wqcbuz4xghd195n0rdfqpkjfpsi09c3i02uh3o3gxt0svekxju7piw4eqtklu9317q3vwb1pfte6d57ces1gq2t8p1k388mez15gpbpc6x2m10099tguow8br6o9tl8zbx7rvr556c18m4wwv6bbhjxbatnzvln6fakvgkox1j66p13d1llv50ywu6kjdkjry0t7l6fxq5jp7c5waexdtbyad5obadmsk812ir1fwxxj46n1k3pifo9l5wn4jhclm3tuurjo0rdufwfpgwjv2vc5sxuyjqw5hwr5valf89zc1xstqt81qyam901vcnq2luefrriw0x8274lm4pi9q9sfg3dgcuj9scd39e1ffeqxsw5jklbk2eloclgdxkgqs6ws1tvfbax8rmd0m6adxmglb9e40oislq20xd3lcaqmrnndz2g47gnf69nwac69l6pxmxl1yerqr6foeiqp691mi89cidpc01a7ejkzjsfc3f4uil78rbl1hvpvrfxsepbufiax97wflftsgm9w2fbtk2fa13o666cb26hygjotngv0vgndsz37ag3l7ae0kxz4grycuzovmfrhneopuln4i9al1sw7auhblbbfu4zatully7xle15m9naqc5bb0yzevysrbb477h45s7dj9bgp6eg6v6ydx4tvruobyh3a2k33pbgvozh187ucyivb4gde4fg0wjgmhmd3ntd484ccx3fy39hwjxdh6e8vatzijvcv72xjl6fst1xbxsgejiew80p3oh789ygm0pce6mlv5ga5c8i92omexhzilfrynbwvp63j8kp1ot5sdbzu1fx3eqcxxmfv8ejzen2fzuspg29eapuairak9nsiu8tramz3qs8hm10drhexprpvhlcgt4gi4e5v27vv6p90uiap7rhogxaxcuhweznmtrvlu4wszdvsysowxp4bz9wwpxq4ji2dc0s3w9edyot6iaivhf0401c04itmr2sir5m2bwci4vw390301ntoyxqh92xyfkm39xy32l3v8dylno2q5g26ddu285bhuo7up9lhrkg5zuxyj9h2kxhlk1va739c4pxoqgvf4rw6957b52lrs4wipun8nrtup0gznu9nafao82sezwlq7e23hbtndhybm819o36zre9o9s0i6ikltc4pcnjjlnl2oanrmnbzghvthb0i4pyeqkfgjsw557efk8bmtnq3f59b076m3djv7dnaamvdjr8vf5hqib25ldfc6j323k7if1vckbqs0ciswcg3pv6uac1thjwtq1t4znfdq1l8d6use602agoo1ziluk6zv5a8ahqul2flwea2m600xmvyi71fd914qomxe6o49yis40jjhn7trhkpagx45t7uen3psnxojii6huwu3l9x2orcs6eifsy3ehtae3zpufv2r2cpgi0i5cwzmo15rwnwylwf3qbjuxg3pjfq6prbo7hswihq8ll14upyfaayjcdegpiboijtukfbscoaj680w4k4ga1yy6v3p7o1hq66rdxi1kzhd4puqf9dkl7c19hwvsncnic3ew6loen3lpiut32m9jm300yjh8tzoag5zbx0eotjae06fdrpb94btt5xk8s3huvh0tylrcmvou3lwd8v9si8bm4mwmhfc9mc17wxzttlv1byzhj7i4c9j8gth7fqole1256sssl1uzga1z7g78sgjbgqxb2tdisgtv79gqqrgleujptw3dplsezwvnpr9idi6f8p74xw9h7npndt0vgdlfui09h0stvhkztlxxp60kqphs28uoygqy4g173dr1o84yjuk6jdx35a5163e6qun6e5nrkb8czktym31kx5p1t5o9q9vpscn15jvfzyt2rkq0no0tnp7fho0ydi76lqttnhawj1b8c5m7x27n4uswep8pd9xj3unfx6z3vbquxzztokaernvsrswmioxklfrbxgbt1c99ba7forcqaryzjgmr9mu1fjqex38p1rjn9oryex7rk344vydatkidd39vyu0yr2eoee01e75mr252ru7wfrem9oajxls6uk16jfxsudwcsvsyttfyi5mbti7egzmc5hjmscu4ap7ahm9b6fc3yk8ddoc3lf73t2iv7y1t01789w9lb9kmugdnbcb84839mw74k2kd9tadqjbb0i0ci08qodmqtqd3l7px9dc3dsskyc6rpjogdqx9tbaz64wkqycv87fso2ppb5fxl1z9n0nex5n9lypiby9wjvj0g9nj8r1l6jhf29sx6kvo1uibvplarlds253ouwxm68rip14ji21zcildix8rsnmetpcqhgwhq9ng4jbios5g5wnaqm3xo3xfmy2rutmoj
p92h02j7zgj7siez6fwsh6mzm4cj9ja27spzdwf59ocp133pii6le2zhym7wxyuiurfjywb86oe3i0gh0bz0jomubomkp793alut7m4ecqwnl3t6wuvnzx6klxksdxogn2d087qf4m5hdx932736ydj15t69p88w6tdqbk7lvn0lrv7c4kihdmf4nam03d038np5kup0f831gpitr8q9k9s1k2hix42la4zfbmzrftaqi0tijqh0sgeo5u48hrms8a0ke8zbqnvjoibak0u8ai3vyiuzofzjgox3tcfybrqaexy7earh4dcbqsb56tqnyzibxzgh28cdynwiafnsxw3v9o4f0p5aykpt7sy8qldsgayhq77s4eg46e0jncvy3w7dlu49qrizd4namzzkm154zvr0y3czhzcrdvndzg4wme1xn1oj2f6sq8k0575th0zo29gbwxc47icdd46hm2xbgev5glshhwdnvezavkacvjn4t2fnhf8jhowsnqhm0mru4ep3och5xe70jwsmy4xaz66a6xvtf1 00:05:58.958 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:05:58.958 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:05:58.958 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:58.958 09:28:44 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.233 [2024-11-05 09:28:44.935289] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:05:59.233 [2024-11-05 09:28:44.935508] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59909 ] 00:05:59.233 { 00:05:59.233 "subsystems": [ 00:05:59.233 { 00:05:59.233 "subsystem": "bdev", 00:05:59.233 "config": [ 00:05:59.233 { 00:05:59.234 "params": { 00:05:59.234 "trtype": "pcie", 00:05:59.234 "traddr": "0000:00:10.0", 00:05:59.234 "name": "Nvme0" 00:05:59.234 }, 00:05:59.234 "method": "bdev_nvme_attach_controller" 00:05:59.234 }, 00:05:59.234 { 00:05:59.234 "method": "bdev_wait_for_examine" 00:05:59.234 } 00:05:59.234 ] 00:05:59.234 } 00:05:59.234 ] 00:05:59.234 } 00:05:59.234 [2024-11-05 09:28:45.074256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.234 [2024-11-05 09:28:45.107682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.234 [2024-11-05 09:28:45.138375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.517  [2024-11-05T09:28:45.475Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:05:59.517 00:05:59.517 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:05:59.517 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:05:59.517 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:05:59.517 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:05:59.517 [2024-11-05 09:28:45.396007] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
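dd_rw_offset closes the file by checking spdk_dd's offset handling: gen_bytes produced the 4096-byte random string assigned to data above, it is written with --seek=1 (an offset of one block on the output side, by analogy with dd's seek), read back with --skip=1 --count=1, and read -rn4096 captures exactly 4096 bytes for the string compare that follows. A file-based sketch of the same round trip (cmp stands in for the harness's in-shell comparison; paths assumed):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
head -c 4096 /dev/urandom > off.in                 # stand-in for gen_bytes 4096
"$DD" --if=off.in --ob=Nvme0n1 --seek=1 --json bdev.json
"$DD" --ib=Nvme0n1 --of=off.out --skip=1 --count=1 --json bdev.json
cmp off.in off.out                                 # must match byte for byte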
00:05:59.517 [2024-11-05 09:28:45.396097] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59928 ] 00:05:59.517 { 00:05:59.517 "subsystems": [ 00:05:59.517 { 00:05:59.517 "subsystem": "bdev", 00:05:59.517 "config": [ 00:05:59.517 { 00:05:59.517 "params": { 00:05:59.517 "trtype": "pcie", 00:05:59.517 "traddr": "0000:00:10.0", 00:05:59.517 "name": "Nvme0" 00:05:59.517 }, 00:05:59.517 "method": "bdev_nvme_attach_controller" 00:05:59.517 }, 00:05:59.517 { 00:05:59.517 "method": "bdev_wait_for_examine" 00:05:59.517 } 00:05:59.517 ] 00:05:59.517 } 00:05:59.517 ] 00:05:59.517 } 00:05:59.783 [2024-11-05 09:28:45.541262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.783 [2024-11-05 09:28:45.568654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.783 [2024-11-05 09:28:45.595514] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:59.783  [2024-11-05T09:28:46.001Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:06:00.043 00:06:00.043 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:06:00.043 ************************************ 00:06:00.043 END TEST dd_rw_offset 00:06:00.043 ************************************ 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ gk58n23ew3x546jna5f3p21e7bjwgm4mfg4tbvd2hhmjgfmelj7k2vwb8uvhf81pjbm65hh9atj6mh4odp3grnbtpkxjq2gf6ctoeq1hx9swl23xx9743n88yfb8ggy383aux2du4rt8c8m82jeuu7o6hhp9ukepihityvtvz945ubuodqt4bbfhlzq1g3q2lwj2jkm4jr0p18q0kq5o3sy7lidbf9fpexrgsvukeczhfuslchlg38awpokvkjznvjhnw1jag7tpi3qqrsxa8pedifw02sxvq4snrv6zs1xcifbfprb2va6yammavq5pirlf44gg0gjo3pnm45fizzx5miyivy0bflt08nk23003183bavjzk4odpzky3gso7p2m4c6pza2cwd37r1oezjz8oy8o6274c192imnheh93pbgmxoz2zqv1uf1crt68a2z48bgsxn4g3dt8qb90bapfi8fk4name8dc9hwhzh46h8l8rvk23h2vm1n769qo16f87ozwlfiem5eq2fixyh7ywv5v2mcdwodyi7ilkt5zpaj8mtbpd1dgjm9wszfuw2tdvppeoby1drzsgypq5aema5u3v8o3bpiivgym95eno1mmtykjf35szvkup608cq0v8i1awfhfxnumx7h7g44hx69qaj3lkczokm9s2yh3k3h1a9xdbawfs1xk994dd5di29jak31g2k77j28rgelkylizy6g2dqw5dwk4c5ddxk97ff5u3x9hctkrskxif7vspx9b4ert3nnnyp5l5vk9biy1zujg18j1lo56y4inz7kethojcmirb87dvefgrm8oa1ake47ttt6gzpfiv6jxu70juc689mbsm7onq87wiqy0ux6rrwy7xcmyv8bpn2vd0z6ayofnygestda54woeaqa1qv9598o183sh6ykppd8zanxy6969uym0g8d1ioju2hlm77b27vxxp8p7iits9rplnoe3zycqyd8qrn5ho55qv0gx76yzavpcoy8rjrw9ksk4bnoi8rewdmthrvk56zjv0quota85joy2cyi8a6duuwciv6x0493tjz1ir80y07sa72lp5get7ao582ssqo27cauj8atpd8bvyr23kcl78n0jfxnhw52oea4kasf48llljd4wqcbuz4xghd195n0rdfqpkjfpsi09c3i02uh3o3gxt0svekxju7piw4eqtklu9317q3vwb1pfte6d57ces1gq2t8p1k388mez15gpbpc6x2m10099tguow8br6o9tl8zbx7rvr556c18m4wwv6bbhjxbatnzvln6fakvgkox1j66p13d1llv50ywu6kjdkjry0t7l6fxq5jp7c5waexdtbyad5obadmsk812ir1fwxxj46n1k3pifo9l5wn4jhclm3tuurjo0rdufwfpgwjv2vc5sxuyjqw5hwr5valf89zc1xstqt81qyam901vcnq2luefrriw0x8274lm4pi9q9sfg3dgcuj9scd39e1ffeqxsw5jklbk2eloclgdxkgqs6ws1tvfbax8rmd0m6adxmglb9e40oislq20xd3lcaqmrnndz2g47gnf69nwac69l6pxmxl1yerqr6foeiqp691mi89cidpc01a7ejkzjsfc3f4uil78rbl1hvpvrfxsepbufiax97wflftsgm9w2fbtk2fa13o666cb26hygjotngv0vgndsz37ag3l7ae0kxz4grycuzovmfrhneopuln4i9al1sw7auhblbbfu4zatully7xle15m9naqc5bb0yzevysrbb477h45s7dj9bgp6eg6v6ydx4tvruobyh3a2k33pbgvozh187ucyivb4gde4fg0wjgmhmd3ntd484ccx3fy39hwjxdh6e8vatzijvcv72xjl6fst1xbxsgejiew80p3oh789ygm0pce6mlv5ga5c8i92omexhzilfrynbwvp63j8kp1ot5sdbz
u1fx3eqcxxmfv8ejzen2fzuspg29eapuairak9nsiu8tramz3qs8hm10drhexprpvhlcgt4gi4e5v27vv6p90uiap7rhogxaxcuhweznmtrvlu4wszdvsysowxp4bz9wwpxq4ji2dc0s3w9edyot6iaivhf0401c04itmr2sir5m2bwci4vw390301ntoyxqh92xyfkm39xy32l3v8dylno2q5g26ddu285bhuo7up9lhrkg5zuxyj9h2kxhlk1va739c4pxoqgvf4rw6957b52lrs4wipun8nrtup0gznu9nafao82sezwlq7e23hbtndhybm819o36zre9o9s0i6ikltc4pcnjjlnl2oanrmnbzghvthb0i4pyeqkfgjsw557efk8bmtnq3f59b076m3djv7dnaamvdjr8vf5hqib25ldfc6j323k7if1vckbqs0ciswcg3pv6uac1thjwtq1t4znfdq1l8d6use602agoo1ziluk6zv5a8ahqul2flwea2m600xmvyi71fd914qomxe6o49yis40jjhn7trhkpagx45t7uen3psnxojii6huwu3l9x2orcs6eifsy3ehtae3zpufv2r2cpgi0i5cwzmo15rwnwylwf3qbjuxg3pjfq6prbo7hswihq8ll14upyfaayjcdegpiboijtukfbscoaj680w4k4ga1yy6v3p7o1hq66rdxi1kzhd4puqf9dkl7c19hwvsncnic3ew6loen3lpiut32m9jm300yjh8tzoag5zbx0eotjae06fdrpb94btt5xk8s3huvh0tylrcmvou3lwd8v9si8bm4mwmhfc9mc17wxzttlv1byzhj7i4c9j8gth7fqole1256sssl1uzga1z7g78sgjbgqxb2tdisgtv79gqqrgleujptw3dplsezwvnpr9idi6f8p74xw9h7npndt0vgdlfui09h0stvhkztlxxp60kqphs28uoygqy4g173dr1o84yjuk6jdx35a5163e6qun6e5nrkb8czktym31kx5p1t5o9q9vpscn15jvfzyt2rkq0no0tnp7fho0ydi76lqttnhawj1b8c5m7x27n4uswep8pd9xj3unfx6z3vbquxzztokaernvsrswmioxklfrbxgbt1c99ba7forcqaryzjgmr9mu1fjqex38p1rjn9oryex7rk344vydatkidd39vyu0yr2eoee01e75mr252ru7wfrem9oajxls6uk16jfxsudwcsvsyttfyi5mbti7egzmc5hjmscu4ap7ahm9b6fc3yk8ddoc3lf73t2iv7y1t01789w9lb9kmugdnbcb84839mw74k2kd9tadqjbb0i0ci08qodmqtqd3l7px9dc3dsskyc6rpjogdqx9tbaz64wkqycv87fso2ppb5fxl1z9n0nex5n9lypiby9wjvj0g9nj8r1l6jhf29sx6kvo1uibvplarlds253ouwxm68rip14ji21zcildix8rsnmetpcqhgwhq9ng4jbios5g5wnaqm3xo3xfmy2rutmojp92h02j7zgj7siez6fwsh6mzm4cj9ja27spzdwf59ocp133pii6le2zhym7wxyuiurfjywb86oe3i0gh0bz0jomubomkp793alut7m4ecqwnl3t6wuvnzx6klxksdxogn2d087qf4m5hdx932736ydj15t69p88w6tdqbk7lvn0lrv7c4kihdmf4nam03d038np5kup0f831gpitr8q9k9s1k2hix42la4zfbmzrftaqi0tijqh0sgeo5u48hrms8a0ke8zbqnvjoibak0u8ai3vyiuzofzjgox3tcfybrqaexy7earh4dcbqsb56tqnyzibxzgh28cdynwiafnsxw3v9o4f0p5aykpt7sy8qldsgayhq77s4eg46e0jncvy3w7dlu49qrizd4namzzkm154zvr0y3czhzcrdvndzg4wme1xn1oj2f6sq8k0575th0zo29gbwxc47icdd46hm2xbgev5glshhwdnvezavkacvjn4t2fnhf8jhowsnqhm0mru4ep3och5xe70jwsmy4xaz66a6xvtf1 == 
\g\k\5\8\n\2\3\e\w\3\x\5\4\6\j\n\a\5\f\3\p\2\1\e\7\b\j\w\g\m\4\m\f\g\4\t\b\v\d\2\h\h\m\j\g\f\m\e\l\j\7\k\2\v\w\b\8\u\v\h\f\8\1\p\j\b\m\6\5\h\h\9\a\t\j\6\m\h\4\o\d\p\3\g\r\n\b\t\p\k\x\j\q\2\g\f\6\c\t\o\e\q\1\h\x\9\s\w\l\2\3\x\x\9\7\4\3\n\8\8\y\f\b\8\g\g\y\3\8\3\a\u\x\2\d\u\4\r\t\8\c\8\m\8\2\j\e\u\u\7\o\6\h\h\p\9\u\k\e\p\i\h\i\t\y\v\t\v\z\9\4\5\u\b\u\o\d\q\t\4\b\b\f\h\l\z\q\1\g\3\q\2\l\w\j\2\j\k\m\4\j\r\0\p\1\8\q\0\k\q\5\o\3\s\y\7\l\i\d\b\f\9\f\p\e\x\r\g\s\v\u\k\e\c\z\h\f\u\s\l\c\h\l\g\3\8\a\w\p\o\k\v\k\j\z\n\v\j\h\n\w\1\j\a\g\7\t\p\i\3\q\q\r\s\x\a\8\p\e\d\i\f\w\0\2\s\x\v\q\4\s\n\r\v\6\z\s\1\x\c\i\f\b\f\p\r\b\2\v\a\6\y\a\m\m\a\v\q\5\p\i\r\l\f\4\4\g\g\0\g\j\o\3\p\n\m\4\5\f\i\z\z\x\5\m\i\y\i\v\y\0\b\f\l\t\0\8\n\k\2\3\0\0\3\1\8\3\b\a\v\j\z\k\4\o\d\p\z\k\y\3\g\s\o\7\p\2\m\4\c\6\p\z\a\2\c\w\d\3\7\r\1\o\e\z\j\z\8\o\y\8\o\6\2\7\4\c\1\9\2\i\m\n\h\e\h\9\3\p\b\g\m\x\o\z\2\z\q\v\1\u\f\1\c\r\t\6\8\a\2\z\4\8\b\g\s\x\n\4\g\3\d\t\8\q\b\9\0\b\a\p\f\i\8\f\k\4\n\a\m\e\8\d\c\9\h\w\h\z\h\4\6\h\8\l\8\r\v\k\2\3\h\2\v\m\1\n\7\6\9\q\o\1\6\f\8\7\o\z\w\l\f\i\e\m\5\e\q\2\f\i\x\y\h\7\y\w\v\5\v\2\m\c\d\w\o\d\y\i\7\i\l\k\t\5\z\p\a\j\8\m\t\b\p\d\1\d\g\j\m\9\w\s\z\f\u\w\2\t\d\v\p\p\e\o\b\y\1\d\r\z\s\g\y\p\q\5\a\e\m\a\5\u\3\v\8\o\3\b\p\i\i\v\g\y\m\9\5\e\n\o\1\m\m\t\y\k\j\f\3\5\s\z\v\k\u\p\6\0\8\c\q\0\v\8\i\1\a\w\f\h\f\x\n\u\m\x\7\h\7\g\4\4\h\x\6\9\q\a\j\3\l\k\c\z\o\k\m\9\s\2\y\h\3\k\3\h\1\a\9\x\d\b\a\w\f\s\1\x\k\9\9\4\d\d\5\d\i\2\9\j\a\k\3\1\g\2\k\7\7\j\2\8\r\g\e\l\k\y\l\i\z\y\6\g\2\d\q\w\5\d\w\k\4\c\5\d\d\x\k\9\7\f\f\5\u\3\x\9\h\c\t\k\r\s\k\x\i\f\7\v\s\p\x\9\b\4\e\r\t\3\n\n\n\y\p\5\l\5\v\k\9\b\i\y\1\z\u\j\g\1\8\j\1\l\o\5\6\y\4\i\n\z\7\k\e\t\h\o\j\c\m\i\r\b\8\7\d\v\e\f\g\r\m\8\o\a\1\a\k\e\4\7\t\t\t\6\g\z\p\f\i\v\6\j\x\u\7\0\j\u\c\6\8\9\m\b\s\m\7\o\n\q\8\7\w\i\q\y\0\u\x\6\r\r\w\y\7\x\c\m\y\v\8\b\p\n\2\v\d\0\z\6\a\y\o\f\n\y\g\e\s\t\d\a\5\4\w\o\e\a\q\a\1\q\v\9\5\9\8\o\1\8\3\s\h\6\y\k\p\p\d\8\z\a\n\x\y\6\9\6\9\u\y\m\0\g\8\d\1\i\o\j\u\2\h\l\m\7\7\b\2\7\v\x\x\p\8\p\7\i\i\t\s\9\r\p\l\n\o\e\3\z\y\c\q\y\d\8\q\r\n\5\h\o\5\5\q\v\0\g\x\7\6\y\z\a\v\p\c\o\y\8\r\j\r\w\9\k\s\k\4\b\n\o\i\8\r\e\w\d\m\t\h\r\v\k\5\6\z\j\v\0\q\u\o\t\a\8\5\j\o\y\2\c\y\i\8\a\6\d\u\u\w\c\i\v\6\x\0\4\9\3\t\j\z\1\i\r\8\0\y\0\7\s\a\7\2\l\p\5\g\e\t\7\a\o\5\8\2\s\s\q\o\2\7\c\a\u\j\8\a\t\p\d\8\b\v\y\r\2\3\k\c\l\7\8\n\0\j\f\x\n\h\w\5\2\o\e\a\4\k\a\s\f\4\8\l\l\l\j\d\4\w\q\c\b\u\z\4\x\g\h\d\1\9\5\n\0\r\d\f\q\p\k\j\f\p\s\i\0\9\c\3\i\0\2\u\h\3\o\3\g\x\t\0\s\v\e\k\x\j\u\7\p\i\w\4\e\q\t\k\l\u\9\3\1\7\q\3\v\w\b\1\p\f\t\e\6\d\5\7\c\e\s\1\g\q\2\t\8\p\1\k\3\8\8\m\e\z\1\5\g\p\b\p\c\6\x\2\m\1\0\0\9\9\t\g\u\o\w\8\b\r\6\o\9\t\l\8\z\b\x\7\r\v\r\5\5\6\c\1\8\m\4\w\w\v\6\b\b\h\j\x\b\a\t\n\z\v\l\n\6\f\a\k\v\g\k\o\x\1\j\6\6\p\1\3\d\1\l\l\v\5\0\y\w\u\6\k\j\d\k\j\r\y\0\t\7\l\6\f\x\q\5\j\p\7\c\5\w\a\e\x\d\t\b\y\a\d\5\o\b\a\d\m\s\k\8\1\2\i\r\1\f\w\x\x\j\4\6\n\1\k\3\p\i\f\o\9\l\5\w\n\4\j\h\c\l\m\3\t\u\u\r\j\o\0\r\d\u\f\w\f\p\g\w\j\v\2\v\c\5\s\x\u\y\j\q\w\5\h\w\r\5\v\a\l\f\8\9\z\c\1\x\s\t\q\t\8\1\q\y\a\m\9\0\1\v\c\n\q\2\l\u\e\f\r\r\i\w\0\x\8\2\7\4\l\m\4\p\i\9\q\9\s\f\g\3\d\g\c\u\j\9\s\c\d\3\9\e\1\f\f\e\q\x\s\w\5\j\k\l\b\k\2\e\l\o\c\l\g\d\x\k\g\q\s\6\w\s\1\t\v\f\b\a\x\8\r\m\d\0\m\6\a\d\x\m\g\l\b\9\e\4\0\o\i\s\l\q\2\0\x\d\3\l\c\a\q\m\r\n\n\d\z\2\g\4\7\g\n\f\6\9\n\w\a\c\6\9\l\6\p\x\m\x\l\1\y\e\r\q\r\6\f\o\e\i\q\p\6\9\1\m\i\8\9\c\i\d\p\c\0\1\a\7\e\j\k\z\j\s\f\c\3\f\4\u\i\l\7\8\r\b\l\1\h\v\p\v\r\f\x\s\e\p\b\u\f\i\a\x\9\7\w\f\l\f\t\s\g\m\9\w\2\f\b\t\k\2\f\a\1\3\o\6\6\6\c\b\2\6\h\y\g\j\o\t\n\g\v\0\v\g\n\d\s\z\3\7\a\g\3\l\7\a\e\0\k\x\z\4\g\r\y\c\u\z\o\v\m\f\r\h\n\e\o\p\u\l\n\4\i\9\a\l\1\s\w\7\a\u\h\b\l\b\
b\f\u\4\z\a\t\u\l\l\y\7\x\l\e\1\5\m\9\n\a\q\c\5\b\b\0\y\z\e\v\y\s\r\b\b\4\7\7\h\4\5\s\7\d\j\9\b\g\p\6\e\g\6\v\6\y\d\x\4\t\v\r\u\o\b\y\h\3\a\2\k\3\3\p\b\g\v\o\z\h\1\8\7\u\c\y\i\v\b\4\g\d\e\4\f\g\0\w\j\g\m\h\m\d\3\n\t\d\4\8\4\c\c\x\3\f\y\3\9\h\w\j\x\d\h\6\e\8\v\a\t\z\i\j\v\c\v\7\2\x\j\l\6\f\s\t\1\x\b\x\s\g\e\j\i\e\w\8\0\p\3\o\h\7\8\9\y\g\m\0\p\c\e\6\m\l\v\5\g\a\5\c\8\i\9\2\o\m\e\x\h\z\i\l\f\r\y\n\b\w\v\p\6\3\j\8\k\p\1\o\t\5\s\d\b\z\u\1\f\x\3\e\q\c\x\x\m\f\v\8\e\j\z\e\n\2\f\z\u\s\p\g\2\9\e\a\p\u\a\i\r\a\k\9\n\s\i\u\8\t\r\a\m\z\3\q\s\8\h\m\1\0\d\r\h\e\x\p\r\p\v\h\l\c\g\t\4\g\i\4\e\5\v\2\7\v\v\6\p\9\0\u\i\a\p\7\r\h\o\g\x\a\x\c\u\h\w\e\z\n\m\t\r\v\l\u\4\w\s\z\d\v\s\y\s\o\w\x\p\4\b\z\9\w\w\p\x\q\4\j\i\2\d\c\0\s\3\w\9\e\d\y\o\t\6\i\a\i\v\h\f\0\4\0\1\c\0\4\i\t\m\r\2\s\i\r\5\m\2\b\w\c\i\4\v\w\3\9\0\3\0\1\n\t\o\y\x\q\h\9\2\x\y\f\k\m\3\9\x\y\3\2\l\3\v\8\d\y\l\n\o\2\q\5\g\2\6\d\d\u\2\8\5\b\h\u\o\7\u\p\9\l\h\r\k\g\5\z\u\x\y\j\9\h\2\k\x\h\l\k\1\v\a\7\3\9\c\4\p\x\o\q\g\v\f\4\r\w\6\9\5\7\b\5\2\l\r\s\4\w\i\p\u\n\8\n\r\t\u\p\0\g\z\n\u\9\n\a\f\a\o\8\2\s\e\z\w\l\q\7\e\2\3\h\b\t\n\d\h\y\b\m\8\1\9\o\3\6\z\r\e\9\o\9\s\0\i\6\i\k\l\t\c\4\p\c\n\j\j\l\n\l\2\o\a\n\r\m\n\b\z\g\h\v\t\h\b\0\i\4\p\y\e\q\k\f\g\j\s\w\5\5\7\e\f\k\8\b\m\t\n\q\3\f\5\9\b\0\7\6\m\3\d\j\v\7\d\n\a\a\m\v\d\j\r\8\v\f\5\h\q\i\b\2\5\l\d\f\c\6\j\3\2\3\k\7\i\f\1\v\c\k\b\q\s\0\c\i\s\w\c\g\3\p\v\6\u\a\c\1\t\h\j\w\t\q\1\t\4\z\n\f\d\q\1\l\8\d\6\u\s\e\6\0\2\a\g\o\o\1\z\i\l\u\k\6\z\v\5\a\8\a\h\q\u\l\2\f\l\w\e\a\2\m\6\0\0\x\m\v\y\i\7\1\f\d\9\1\4\q\o\m\x\e\6\o\4\9\y\i\s\4\0\j\j\h\n\7\t\r\h\k\p\a\g\x\4\5\t\7\u\e\n\3\p\s\n\x\o\j\i\i\6\h\u\w\u\3\l\9\x\2\o\r\c\s\6\e\i\f\s\y\3\e\h\t\a\e\3\z\p\u\f\v\2\r\2\c\p\g\i\0\i\5\c\w\z\m\o\1\5\r\w\n\w\y\l\w\f\3\q\b\j\u\x\g\3\p\j\f\q\6\p\r\b\o\7\h\s\w\i\h\q\8\l\l\1\4\u\p\y\f\a\a\y\j\c\d\e\g\p\i\b\o\i\j\t\u\k\f\b\s\c\o\a\j\6\8\0\w\4\k\4\g\a\1\y\y\6\v\3\p\7\o\1\h\q\6\6\r\d\x\i\1\k\z\h\d\4\p\u\q\f\9\d\k\l\7\c\1\9\h\w\v\s\n\c\n\i\c\3\e\w\6\l\o\e\n\3\l\p\i\u\t\3\2\m\9\j\m\3\0\0\y\j\h\8\t\z\o\a\g\5\z\b\x\0\e\o\t\j\a\e\0\6\f\d\r\p\b\9\4\b\t\t\5\x\k\8\s\3\h\u\v\h\0\t\y\l\r\c\m\v\o\u\3\l\w\d\8\v\9\s\i\8\b\m\4\m\w\m\h\f\c\9\m\c\1\7\w\x\z\t\t\l\v\1\b\y\z\h\j\7\i\4\c\9\j\8\g\t\h\7\f\q\o\l\e\1\2\5\6\s\s\s\l\1\u\z\g\a\1\z\7\g\7\8\s\g\j\b\g\q\x\b\2\t\d\i\s\g\t\v\7\9\g\q\q\r\g\l\e\u\j\p\t\w\3\d\p\l\s\e\z\w\v\n\p\r\9\i\d\i\6\f\8\p\7\4\x\w\9\h\7\n\p\n\d\t\0\v\g\d\l\f\u\i\0\9\h\0\s\t\v\h\k\z\t\l\x\x\p\6\0\k\q\p\h\s\2\8\u\o\y\g\q\y\4\g\1\7\3\d\r\1\o\8\4\y\j\u\k\6\j\d\x\3\5\a\5\1\6\3\e\6\q\u\n\6\e\5\n\r\k\b\8\c\z\k\t\y\m\3\1\k\x\5\p\1\t\5\o\9\q\9\v\p\s\c\n\1\5\j\v\f\z\y\t\2\r\k\q\0\n\o\0\t\n\p\7\f\h\o\0\y\d\i\7\6\l\q\t\t\n\h\a\w\j\1\b\8\c\5\m\7\x\2\7\n\4\u\s\w\e\p\8\p\d\9\x\j\3\u\n\f\x\6\z\3\v\b\q\u\x\z\z\t\o\k\a\e\r\n\v\s\r\s\w\m\i\o\x\k\l\f\r\b\x\g\b\t\1\c\9\9\b\a\7\f\o\r\c\q\a\r\y\z\j\g\m\r\9\m\u\1\f\j\q\e\x\3\8\p\1\r\j\n\9\o\r\y\e\x\7\r\k\3\4\4\v\y\d\a\t\k\i\d\d\3\9\v\y\u\0\y\r\2\e\o\e\e\0\1\e\7\5\m\r\2\5\2\r\u\7\w\f\r\e\m\9\o\a\j\x\l\s\6\u\k\1\6\j\f\x\s\u\d\w\c\s\v\s\y\t\t\f\y\i\5\m\b\t\i\7\e\g\z\m\c\5\h\j\m\s\c\u\4\a\p\7\a\h\m\9\b\6\f\c\3\y\k\8\d\d\o\c\3\l\f\7\3\t\2\i\v\7\y\1\t\0\1\7\8\9\w\9\l\b\9\k\m\u\g\d\n\b\c\b\8\4\8\3\9\m\w\7\4\k\2\k\d\9\t\a\d\q\j\b\b\0\i\0\c\i\0\8\q\o\d\m\q\t\q\d\3\l\7\p\x\9\d\c\3\d\s\s\k\y\c\6\r\p\j\o\g\d\q\x\9\t\b\a\z\6\4\w\k\q\y\c\v\8\7\f\s\o\2\p\p\b\5\f\x\l\1\z\9\n\0\n\e\x\5\n\9\l\y\p\i\b\y\9\w\j\v\j\0\g\9\n\j\8\r\1\l\6\j\h\f\2\9\s\x\6\k\v\o\1\u\i\b\v\p\l\a\r\l\d\s\2\5\3\o\u\w\x\m\6\8\r\i\p\1\4\j\i\2\1\z\c\i\l\d\i\x\8\r\s\n\m\e\t\p\c\q\h\g\w\h\q\9\n\g\4\j\b\i\o\s\5\g\5\w\n\a\q\m\3\x\o\3\x\f\m\y\2\r\u\t\m\o\j\p\9\2\h\0
\2\j\7\z\g\j\7\s\i\e\z\6\f\w\s\h\6\m\z\m\4\c\j\9\j\a\2\7\s\p\z\d\w\f\5\9\o\c\p\1\3\3\p\i\i\6\l\e\2\z\h\y\m\7\w\x\y\u\i\u\r\f\j\y\w\b\8\6\o\e\3\i\0\g\h\0\b\z\0\j\o\m\u\b\o\m\k\p\7\9\3\a\l\u\t\7\m\4\e\c\q\w\n\l\3\t\6\w\u\v\n\z\x\6\k\l\x\k\s\d\x\o\g\n\2\d\0\8\7\q\f\4\m\5\h\d\x\9\3\2\7\3\6\y\d\j\1\5\t\6\9\p\8\8\w\6\t\d\q\b\k\7\l\v\n\0\l\r\v\7\c\4\k\i\h\d\m\f\4\n\a\m\0\3\d\0\3\8\n\p\5\k\u\p\0\f\8\3\1\g\p\i\t\r\8\q\9\k\9\s\1\k\2\h\i\x\4\2\l\a\4\z\f\b\m\z\r\f\t\a\q\i\0\t\i\j\q\h\0\s\g\e\o\5\u\4\8\h\r\m\s\8\a\0\k\e\8\z\b\q\n\v\j\o\i\b\a\k\0\u\8\a\i\3\v\y\i\u\z\o\f\z\j\g\o\x\3\t\c\f\y\b\r\q\a\e\x\y\7\e\a\r\h\4\d\c\b\q\s\b\5\6\t\q\n\y\z\i\b\x\z\g\h\2\8\c\d\y\n\w\i\a\f\n\s\x\w\3\v\9\o\4\f\0\p\5\a\y\k\p\t\7\s\y\8\q\l\d\s\g\a\y\h\q\7\7\s\4\e\g\4\6\e\0\j\n\c\v\y\3\w\7\d\l\u\4\9\q\r\i\z\d\4\n\a\m\z\z\k\m\1\5\4\z\v\r\0\y\3\c\z\h\z\c\r\d\v\n\d\z\g\4\w\m\e\1\x\n\1\o\j\2\f\6\s\q\8\k\0\5\7\5\t\h\0\z\o\2\9\g\b\w\x\c\4\7\i\c\d\d\4\6\h\m\2\x\b\g\e\v\5\g\l\s\h\h\w\d\n\v\e\z\a\v\k\a\c\v\j\n\4\t\2\f\n\h\f\8\j\h\o\w\s\n\q\h\m\0\m\r\u\4\e\p\3\o\c\h\5\x\e\7\0\j\w\s\m\y\4\x\a\z\6\6\a\6\x\v\t\f\1 ]] 00:06:00.044 00:06:00.044 real 0m0.969s 00:06:00.044 user 0m0.668s 00:06:00.044 sys 0m0.374s 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:06:00.044 09:28:45 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.044 { 00:06:00.044 "subsystems": [ 00:06:00.044 { 00:06:00.044 "subsystem": "bdev", 00:06:00.044 "config": [ 00:06:00.044 { 00:06:00.044 "params": { 00:06:00.044 "trtype": "pcie", 00:06:00.044 "traddr": "0000:00:10.0", 00:06:00.044 "name": "Nvme0" 00:06:00.044 }, 00:06:00.044 "method": "bdev_nvme_attach_controller" 00:06:00.044 }, 00:06:00.044 { 00:06:00.044 "method": "bdev_wait_for_examine" 00:06:00.044 } 00:06:00.044 ] 00:06:00.044 } 00:06:00.044 ] 00:06:00.044 } 00:06:00.044 [2024-11-05 09:28:45.907924] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
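[The cleanup step logged above (clear_nvme) rewrites the start of the bdev with zeroes so the next test group starts from a known state. A sketch of the equivalent call, with SPDK_DD and CONF as assumed in the previous note:]

    # zero the first 1 MiB of Nvme0n1 (mirrors the dd/common.sh@18 invocation above)
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$CONF"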
00:06:00.044 [2024-11-05 09:28:45.908010] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59952 ] 00:06:00.304 [2024-11-05 09:28:46.050922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.304 [2024-11-05 09:28:46.078227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.304 [2024-11-05 09:28:46.104941] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:00.304  [2024-11-05T09:28:46.521Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:06:00.563 00:06:00.563 09:28:46 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.563 ************************************ 00:06:00.563 END TEST spdk_dd_basic_rw 00:06:00.563 ************************************ 00:06:00.563 00:06:00.563 real 0m13.916s 00:06:00.563 user 0m10.107s 00:06:00.563 sys 0m4.380s 00:06:00.563 09:28:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:00.563 09:28:46 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:06:00.563 09:28:46 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.563 09:28:46 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.563 09:28:46 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:00.563 09:28:46 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:00.563 ************************************ 00:06:00.563 START TEST spdk_dd_posix 00:06:00.563 ************************************ 00:06:00.563 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:06:00.563 * Looking for test storage... 
00:06:00.563 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:00.563 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:00.563 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lcov --version 00:06:00.563 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.823 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:00.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.824 --rc genhtml_branch_coverage=1 00:06:00.824 --rc genhtml_function_coverage=1 00:06:00.824 --rc genhtml_legend=1 00:06:00.824 --rc geninfo_all_blocks=1 00:06:00.824 --rc geninfo_unexecuted_blocks=1 00:06:00.824 00:06:00.824 ' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:00.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.824 --rc genhtml_branch_coverage=1 00:06:00.824 --rc genhtml_function_coverage=1 00:06:00.824 --rc genhtml_legend=1 00:06:00.824 --rc geninfo_all_blocks=1 00:06:00.824 --rc geninfo_unexecuted_blocks=1 00:06:00.824 00:06:00.824 ' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:00.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.824 --rc genhtml_branch_coverage=1 00:06:00.824 --rc genhtml_function_coverage=1 00:06:00.824 --rc genhtml_legend=1 00:06:00.824 --rc geninfo_all_blocks=1 00:06:00.824 --rc geninfo_unexecuted_blocks=1 00:06:00.824 00:06:00.824 ' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:00.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.824 --rc genhtml_branch_coverage=1 00:06:00.824 --rc genhtml_function_coverage=1 00:06:00.824 --rc genhtml_legend=1 00:06:00.824 --rc geninfo_all_blocks=1 00:06:00.824 --rc geninfo_unexecuted_blocks=1 00:06:00.824 00:06:00.824 ' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:06:00.824 * First test run, liburing in use 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # 
xtrace_disable 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:00.824 ************************************ 00:06:00.824 START TEST dd_flag_append 00:06:00.824 ************************************ 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1127 -- # append 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=i3h5h450kk03ytmo4okqz9z621qoo2ht 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=flpllspn81nk5f6vv54sb267wsvuw4fz 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s i3h5h450kk03ytmo4okqz9z621qoo2ht 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s flpllspn81nk5f6vv54sb267wsvuw4fz 00:06:00.824 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:00.824 [2024-11-05 09:28:46.633577] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
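[dd_flag_append copies one 32-byte dump onto another with --oflag=append and then expects the destination to hold its original bytes followed by the source bytes — that is what the long [[ ... == \f\l\p... ]] check below verifies. A sketch under the same assumed /tmp paths (the script generates printable bytes with gen_bytes; /dev/urandom stands in here):]

    head -c 32 /dev/urandom > /tmp/dump0
    head -c 32 /dev/urandom > /tmp/dump1
    cat /tmp/dump1 /tmp/dump0 > /tmp/expected            # what O_APPEND should produce
    "$SPDK_DD" --if=/tmp/dump0 --of=/tmp/dump1 --oflag=append
    cmp /tmp/dump1 /tmp/expected && echo "append flag honored"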
00:06:00.824 [2024-11-05 09:28:46.633670] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60024 ] 00:06:00.824 [2024-11-05 09:28:46.778563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.084 [2024-11-05 09:28:46.810056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.084 [2024-11-05 09:28:46.836678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.084  [2024-11-05T09:28:47.042Z] Copying: 32/32 [B] (average 31 kBps) 00:06:01.084 00:06:01.084 ************************************ 00:06:01.084 END TEST dd_flag_append 00:06:01.085 ************************************ 00:06:01.085 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ flpllspn81nk5f6vv54sb267wsvuw4fzi3h5h450kk03ytmo4okqz9z621qoo2ht == \f\l\p\l\l\s\p\n\8\1\n\k\5\f\6\v\v\5\4\s\b\2\6\7\w\s\v\u\w\4\f\z\i\3\h\5\h\4\5\0\k\k\0\3\y\t\m\o\4\o\k\q\z\9\z\6\2\1\q\o\o\2\h\t ]] 00:06:01.085 00:06:01.085 real 0m0.397s 00:06:01.085 user 0m0.189s 00:06:01.085 sys 0m0.170s 00:06:01.085 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.085 09:28:46 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:01.085 ************************************ 00:06:01.085 START TEST dd_flag_directory 00:06:01.085 ************************************ 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1127 -- # directory 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.085 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:01.344 [2024-11-05 09:28:47.073533] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:01.344 [2024-11-05 09:28:47.073611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60047 ] 00:06:01.344 [2024-11-05 09:28:47.214275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.344 [2024-11-05 09:28:47.241088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.344 [2024-11-05 09:28:47.267762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.344 [2024-11-05 09:28:47.284650] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:01.344 [2024-11-05 09:28:47.284700] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:01.344 [2024-11-05 09:28:47.284731] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.604 [2024-11-05 09:28:47.342719] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.604 09:28:47 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:01.604 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:01.604 [2024-11-05 09:28:47.447085] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:01.604 [2024-11-05 09:28:47.447172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60062 ] 00:06:01.864 [2024-11-05 09:28:47.590784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.864 [2024-11-05 09:28:47.617814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.864 [2024-11-05 09:28:47.644166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.864 [2024-11-05 09:28:47.660450] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:01.864 [2024-11-05 09:28:47.660499] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:01.864 [2024-11-05 09:28:47.660531] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:01.864 [2024-11-05 09:28:47.719537] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:01.864 00:06:01.864 real 0m0.753s 00:06:01.864 user 0m0.387s 00:06:01.864 sys 0m0.158s 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.864 ************************************ 00:06:01.864 END TEST dd_flag_directory 00:06:01.864 ************************************ 00:06:01.864 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:06:01.865 09:28:47 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:06:01.865 09:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.865 09:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.865 09:28:47 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:02.125 ************************************ 00:06:02.125 START TEST dd_flag_nofollow 00:06:02.125 ************************************ 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1127 -- # nofollow 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.125 09:28:47 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.125 [2024-11-05 09:28:47.888762] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
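[dd_flag_nofollow points symlinks at both dump files and then drives spdk_dd at them twice: with nofollow the open is expected to fail with ELOOP (the "Too many levels of symbolic links" errors below), and without the flag the link is dereferenced and the copy succeeds. A sketch, again with illustrative /tmp paths:]

    ln -fs /tmp/dump0 /tmp/dump0.link
    # expected failure: O_NOFOLLOW refuses to open a symlink
    "$SPDK_DD" --if=/tmp/dump0.link --iflag=nofollow --of=/tmp/dump1 \
        && echo "unexpected: nofollow open succeeded"
    # control case: without the flag the link is followed normally
    "$SPDK_DD" --if=/tmp/dump0.link --of=/tmp/dump1 && cmp /tmp/dump0 /tmp/dump1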
00:06:02.125 [2024-11-05 09:28:47.888862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60085 ] 00:06:02.125 [2024-11-05 09:28:48.033669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.125 [2024-11-05 09:28:48.060541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.385 [2024-11-05 09:28:48.087423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.385 [2024-11-05 09:28:48.105234] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:02.385 [2024-11-05 09:28:48.105287] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:02.385 [2024-11-05 09:28:48.105336] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.385 [2024-11-05 09:28:48.166850] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:02.385 09:28:48 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:02.385 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:02.385 [2024-11-05 09:28:48.278879] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:02.385 [2024-11-05 09:28:48.279116] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60100 ] 00:06:02.644 [2024-11-05 09:28:48.425523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.644 [2024-11-05 09:28:48.452388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.644 [2024-11-05 09:28:48.478986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:02.645 [2024-11-05 09:28:48.495657] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:02.645 [2024-11-05 09:28:48.496023] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:02.645 [2024-11-05 09:28:48.496149] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:02.645 [2024-11-05 09:28:48.553242] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:06:02.645 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:06:02.904 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:02.904 09:28:48 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:02.904 [2024-11-05 09:28:48.668140] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:02.904 [2024-11-05 09:28:48.668418] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60102 ] 00:06:02.904 [2024-11-05 09:28:48.814811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.904 [2024-11-05 09:28:48.846749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.163 [2024-11-05 09:28:48.874668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:03.163  [2024-11-05T09:28:49.121Z] Copying: 512/512 [B] (average 500 kBps) 00:06:03.163 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ xw094lqv6oll65a14ud0hj6daen2tj6fgji4i9aemqs3c2axrjpnlrdm5q54o17zt519j51y1ous5kuqunw0pkwk7lgw5vrii2npkiqtedpqwth8ghw32kn0o44huyfpwkz5nh1rrovim20oakubam7ca1m6672g2i3f3w0uzikxeziyg50qsc4plyphcqwnsydk436esrwnpu6xtmuy0zsdknam6q8sl9x5lp8vcu9z6bqlxsuw44hccawvg7owi1ad04eu6nkoq8eyzf3241jofk90r1b6yn6b63u2v1sg5085e7kuannoz66tr2mx4zv9tkembcivrdv4w9haldkk9h9meq48wm9l344jxo6ljydwuiy6wvq80plhlmytig9mn9pn7hzgwdgsr69tl3k7jpgxkqko10irqy362ddb80weikb5sj37td92np0wfxj1uaac7537f79f3uo4xha6vc03yyabtegvfld73k9f0cbavde7sa8al5bmoguq == \x\w\0\9\4\l\q\v\6\o\l\l\6\5\a\1\4\u\d\0\h\j\6\d\a\e\n\2\t\j\6\f\g\j\i\4\i\9\a\e\m\q\s\3\c\2\a\x\r\j\p\n\l\r\d\m\5\q\5\4\o\1\7\z\t\5\1\9\j\5\1\y\1\o\u\s\5\k\u\q\u\n\w\0\p\k\w\k\7\l\g\w\5\v\r\i\i\2\n\p\k\i\q\t\e\d\p\q\w\t\h\8\g\h\w\3\2\k\n\0\o\4\4\h\u\y\f\p\w\k\z\5\n\h\1\r\r\o\v\i\m\2\0\o\a\k\u\b\a\m\7\c\a\1\m\6\6\7\2\g\2\i\3\f\3\w\0\u\z\i\k\x\e\z\i\y\g\5\0\q\s\c\4\p\l\y\p\h\c\q\w\n\s\y\d\k\4\3\6\e\s\r\w\n\p\u\6\x\t\m\u\y\0\z\s\d\k\n\a\m\6\q\8\s\l\9\x\5\l\p\8\v\c\u\9\z\6\b\q\l\x\s\u\w\4\4\h\c\c\a\w\v\g\7\o\w\i\1\a\d\0\4\e\u\6\n\k\o\q\8\e\y\z\f\3\2\4\1\j\o\f\k\9\0\r\1\b\6\y\n\6\b\6\3\u\2\v\1\s\g\5\0\8\5\e\7\k\u\a\n\n\o\z\6\6\t\r\2\m\x\4\z\v\9\t\k\e\m\b\c\i\v\r\d\v\4\w\9\h\a\l\d\k\k\9\h\9\m\e\q\4\8\w\m\9\l\3\4\4\j\x\o\6\l\j\y\d\w\u\i\y\6\w\v\q\8\0\p\l\h\l\m\y\t\i\g\9\m\n\9\p\n\7\h\z\g\w\d\g\s\r\6\9\t\l\3\k\7\j\p\g\x\k\q\k\o\1\0\i\r\q\y\3\6\2\d\d\b\8\0\w\e\i\k\b\5\s\j\3\7\t\d\9\2\n\p\0\w\f\x\j\1\u\a\a\c\7\5\3\7\f\7\9\f\3\u\o\4\x\h\a\6\v\c\0\3\y\y\a\b\t\e\g\v\f\l\d\7\3\k\9\f\0\c\b\a\v\d\e\7\s\a\8\a\l\5\b\m\o\g\u\q ]] 00:06:03.163 00:06:03.163 real 0m1.177s 00:06:03.163 user 0m0.593s 00:06:03.163 sys 0m0.335s 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:06:03.163 ************************************ 00:06:03.163 END TEST dd_flag_nofollow 00:06:03.163 ************************************ 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:03.163 ************************************ 00:06:03.163 START TEST dd_flag_noatime 00:06:03.163 ************************************ 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1127 -- # noatime 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1730798928 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1730798929 00:06:03.163 09:28:49 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:06:04.541 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.541 [2024-11-05 09:28:50.136104] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:04.541 [2024-11-05 09:28:50.136382] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60139 ] 00:06:04.541 [2024-11-05 09:28:50.286275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.541 [2024-11-05 09:28:50.325892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.541 [2024-11-05 09:28:50.359016] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:04.541  [2024-11-05T09:28:50.499Z] Copying: 512/512 [B] (average 500 kBps) 00:06:04.541 00:06:04.541 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:04.800 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1730798928 )) 00:06:04.800 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.800 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1730798929 )) 00:06:04.800 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:04.800 [2024-11-05 09:28:50.560805] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
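[dd_flag_noatime records each dump's access time with stat --printf=%X, sleeps a second so any update would be visible, then reads through spdk_dd with --iflag=noatime; the (( atime_if == ... )) checks below pass only if the O_NOATIME read left the timestamp alone. A sketch of the unchanged-atime half of the check (this assumes a filesystem where atimes actually update; relatime mounts can mask the difference):]

    before=$(stat --printf=%X /tmp/dump0)
    sleep 1
    "$SPDK_DD" --if=/tmp/dump0 --iflag=noatime --of=/tmp/dump1
    after=$(stat --printf=%X /tmp/dump0)
    (( before == after )) && echo "atime untouched by O_NOATIME read"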
00:06:04.800 [2024-11-05 09:28:50.560916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60158 ] 00:06:04.800 [2024-11-05 09:28:50.707855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.800 [2024-11-05 09:28:50.735115] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.059 [2024-11-05 09:28:50.762619] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.059  [2024-11-05T09:28:51.017Z] Copying: 512/512 [B] (average 500 kBps) 00:06:05.059 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1730798930 )) 00:06:05.059 00:06:05.059 real 0m1.841s 00:06:05.059 user 0m0.420s 00:06:05.059 sys 0m0.375s 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:05.059 ************************************ 00:06:05.059 END TEST dd_flag_noatime 00:06:05.059 ************************************ 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:05.059 ************************************ 00:06:05.059 START TEST dd_flags_misc 00:06:05.059 ************************************ 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1127 -- # io 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.059 09:28:50 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:05.059 [2024-11-05 09:28:51.014450] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
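[dd_flags_misc builds the flag matrix shown above — flags_ro for the input side, flags_rw adding sync and dsync for the output side — and runs one copy per input/output pairing; the direct/direct run is the first one logged here, with nonblock, sync, and dsync following. A sketch of the loop structure, mirroring the arrays from dd/posix.sh@81-82 with the same illustrative paths:]

    flags_ro=(direct nonblock)
    flags_rw=("${flags_ro[@]}" sync dsync)
    for flag_ro in "${flags_ro[@]}"; do
        for flag_rw in "${flags_rw[@]}"; do
            "$SPDK_DD" --if=/tmp/dump0 --iflag="$flag_ro" \
                       --of=/tmp/dump1 --oflag="$flag_rw"
        done
    done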
00:06:05.059 [2024-11-05 09:28:51.014540] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60181 ] 00:06:05.318 [2024-11-05 09:28:51.157810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.318 [2024-11-05 09:28:51.187536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.318 [2024-11-05 09:28:51.214211] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.318  [2024-11-05T09:28:51.535Z] Copying: 512/512 [B] (average 500 kBps) 00:06:05.577 00:06:05.577 09:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 69fh9oxxi4mxi1d0wbf4f37ll1d9x5jkinihdr1w0mlmf7xpgr3d5dlvmlk0ajnz6pz91ekjixzkzq1qbewddizpds0qxyhkgs2f6k4u0koleiw2fnzz7wpof9dn8jwdojs64jhqmev9zkw8t7i0asrnxajvwmz1teq6rlf3h3kzxx501ompdvl3s1f7v3igcjznxc53rcwlcvkjv4gae3xe791xplv0801so73xj7ij6uxuypfjz568jx6rrdhcub9ulax3xf7h9hb70ojpjrmzcxbi2fkzr6qcdm50qnd561v6bgmbvv0gnqc5l88oncecgztrkh08868khcv6d84erlralcmwfdp6pblv8tuapwjukrzgvto2ytf3arr5a0kcm4zbef39vurm5lb6i67wwviq6w87l7w04lahzly30tfgdtlf03w51hjzlt1ryh57m0fnc2w0n110m437sl56nzeprimp9iqtxuznp7n35db55i9hc8f4wswbg53l == \6\9\f\h\9\o\x\x\i\4\m\x\i\1\d\0\w\b\f\4\f\3\7\l\l\1\d\9\x\5\j\k\i\n\i\h\d\r\1\w\0\m\l\m\f\7\x\p\g\r\3\d\5\d\l\v\m\l\k\0\a\j\n\z\6\p\z\9\1\e\k\j\i\x\z\k\z\q\1\q\b\e\w\d\d\i\z\p\d\s\0\q\x\y\h\k\g\s\2\f\6\k\4\u\0\k\o\l\e\i\w\2\f\n\z\z\7\w\p\o\f\9\d\n\8\j\w\d\o\j\s\6\4\j\h\q\m\e\v\9\z\k\w\8\t\7\i\0\a\s\r\n\x\a\j\v\w\m\z\1\t\e\q\6\r\l\f\3\h\3\k\z\x\x\5\0\1\o\m\p\d\v\l\3\s\1\f\7\v\3\i\g\c\j\z\n\x\c\5\3\r\c\w\l\c\v\k\j\v\4\g\a\e\3\x\e\7\9\1\x\p\l\v\0\8\0\1\s\o\7\3\x\j\7\i\j\6\u\x\u\y\p\f\j\z\5\6\8\j\x\6\r\r\d\h\c\u\b\9\u\l\a\x\3\x\f\7\h\9\h\b\7\0\o\j\p\j\r\m\z\c\x\b\i\2\f\k\z\r\6\q\c\d\m\5\0\q\n\d\5\6\1\v\6\b\g\m\b\v\v\0\g\n\q\c\5\l\8\8\o\n\c\e\c\g\z\t\r\k\h\0\8\8\6\8\k\h\c\v\6\d\8\4\e\r\l\r\a\l\c\m\w\f\d\p\6\p\b\l\v\8\t\u\a\p\w\j\u\k\r\z\g\v\t\o\2\y\t\f\3\a\r\r\5\a\0\k\c\m\4\z\b\e\f\3\9\v\u\r\m\5\l\b\6\i\6\7\w\w\v\i\q\6\w\8\7\l\7\w\0\4\l\a\h\z\l\y\3\0\t\f\g\d\t\l\f\0\3\w\5\1\h\j\z\l\t\1\r\y\h\5\7\m\0\f\n\c\2\w\0\n\1\1\0\m\4\3\7\s\l\5\6\n\z\e\p\r\i\m\p\9\i\q\t\x\u\z\n\p\7\n\3\5\d\b\5\5\i\9\h\c\8\f\4\w\s\w\b\g\5\3\l ]] 00:06:05.577 09:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.577 09:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:05.577 [2024-11-05 09:28:51.397159] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:05.577 [2024-11-05 09:28:51.397248] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60196 ] 00:06:05.577 [2024-11-05 09:28:51.533020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.836 [2024-11-05 09:28:51.560849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.836 [2024-11-05 09:28:51.587049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:05.836  [2024-11-05T09:28:51.794Z] Copying: 512/512 [B] (average 500 kBps) 00:06:05.836 00:06:05.836 09:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 69fh9oxxi4mxi1d0wbf4f37ll1d9x5jkinihdr1w0mlmf7xpgr3d5dlvmlk0ajnz6pz91ekjixzkzq1qbewddizpds0qxyhkgs2f6k4u0koleiw2fnzz7wpof9dn8jwdojs64jhqmev9zkw8t7i0asrnxajvwmz1teq6rlf3h3kzxx501ompdvl3s1f7v3igcjznxc53rcwlcvkjv4gae3xe791xplv0801so73xj7ij6uxuypfjz568jx6rrdhcub9ulax3xf7h9hb70ojpjrmzcxbi2fkzr6qcdm50qnd561v6bgmbvv0gnqc5l88oncecgztrkh08868khcv6d84erlralcmwfdp6pblv8tuapwjukrzgvto2ytf3arr5a0kcm4zbef39vurm5lb6i67wwviq6w87l7w04lahzly30tfgdtlf03w51hjzlt1ryh57m0fnc2w0n110m437sl56nzeprimp9iqtxuznp7n35db55i9hc8f4wswbg53l == \6\9\f\h\9\o\x\x\i\4\m\x\i\1\d\0\w\b\f\4\f\3\7\l\l\1\d\9\x\5\j\k\i\n\i\h\d\r\1\w\0\m\l\m\f\7\x\p\g\r\3\d\5\d\l\v\m\l\k\0\a\j\n\z\6\p\z\9\1\e\k\j\i\x\z\k\z\q\1\q\b\e\w\d\d\i\z\p\d\s\0\q\x\y\h\k\g\s\2\f\6\k\4\u\0\k\o\l\e\i\w\2\f\n\z\z\7\w\p\o\f\9\d\n\8\j\w\d\o\j\s\6\4\j\h\q\m\e\v\9\z\k\w\8\t\7\i\0\a\s\r\n\x\a\j\v\w\m\z\1\t\e\q\6\r\l\f\3\h\3\k\z\x\x\5\0\1\o\m\p\d\v\l\3\s\1\f\7\v\3\i\g\c\j\z\n\x\c\5\3\r\c\w\l\c\v\k\j\v\4\g\a\e\3\x\e\7\9\1\x\p\l\v\0\8\0\1\s\o\7\3\x\j\7\i\j\6\u\x\u\y\p\f\j\z\5\6\8\j\x\6\r\r\d\h\c\u\b\9\u\l\a\x\3\x\f\7\h\9\h\b\7\0\o\j\p\j\r\m\z\c\x\b\i\2\f\k\z\r\6\q\c\d\m\5\0\q\n\d\5\6\1\v\6\b\g\m\b\v\v\0\g\n\q\c\5\l\8\8\o\n\c\e\c\g\z\t\r\k\h\0\8\8\6\8\k\h\c\v\6\d\8\4\e\r\l\r\a\l\c\m\w\f\d\p\6\p\b\l\v\8\t\u\a\p\w\j\u\k\r\z\g\v\t\o\2\y\t\f\3\a\r\r\5\a\0\k\c\m\4\z\b\e\f\3\9\v\u\r\m\5\l\b\6\i\6\7\w\w\v\i\q\6\w\8\7\l\7\w\0\4\l\a\h\z\l\y\3\0\t\f\g\d\t\l\f\0\3\w\5\1\h\j\z\l\t\1\r\y\h\5\7\m\0\f\n\c\2\w\0\n\1\1\0\m\4\3\7\s\l\5\6\n\z\e\p\r\i\m\p\9\i\q\t\x\u\z\n\p\7\n\3\5\d\b\5\5\i\9\h\c\8\f\4\w\s\w\b\g\5\3\l ]] 00:06:05.836 09:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:05.836 09:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:05.836 [2024-11-05 09:28:51.777328] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:05.836 [2024-11-05 09:28:51.777446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60200 ] 00:06:06.094 [2024-11-05 09:28:51.918515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.094 [2024-11-05 09:28:51.945611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.094 [2024-11-05 09:28:51.971842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.094  [2024-11-05T09:28:52.310Z] Copying: 512/512 [B] (average 100 kBps) 00:06:06.352 00:06:06.353 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 69fh9oxxi4mxi1d0wbf4f37ll1d9x5jkinihdr1w0mlmf7xpgr3d5dlvmlk0ajnz6pz91ekjixzkzq1qbewddizpds0qxyhkgs2f6k4u0koleiw2fnzz7wpof9dn8jwdojs64jhqmev9zkw8t7i0asrnxajvwmz1teq6rlf3h3kzxx501ompdvl3s1f7v3igcjznxc53rcwlcvkjv4gae3xe791xplv0801so73xj7ij6uxuypfjz568jx6rrdhcub9ulax3xf7h9hb70ojpjrmzcxbi2fkzr6qcdm50qnd561v6bgmbvv0gnqc5l88oncecgztrkh08868khcv6d84erlralcmwfdp6pblv8tuapwjukrzgvto2ytf3arr5a0kcm4zbef39vurm5lb6i67wwviq6w87l7w04lahzly30tfgdtlf03w51hjzlt1ryh57m0fnc2w0n110m437sl56nzeprimp9iqtxuznp7n35db55i9hc8f4wswbg53l == \6\9\f\h\9\o\x\x\i\4\m\x\i\1\d\0\w\b\f\4\f\3\7\l\l\1\d\9\x\5\j\k\i\n\i\h\d\r\1\w\0\m\l\m\f\7\x\p\g\r\3\d\5\d\l\v\m\l\k\0\a\j\n\z\6\p\z\9\1\e\k\j\i\x\z\k\z\q\1\q\b\e\w\d\d\i\z\p\d\s\0\q\x\y\h\k\g\s\2\f\6\k\4\u\0\k\o\l\e\i\w\2\f\n\z\z\7\w\p\o\f\9\d\n\8\j\w\d\o\j\s\6\4\j\h\q\m\e\v\9\z\k\w\8\t\7\i\0\a\s\r\n\x\a\j\v\w\m\z\1\t\e\q\6\r\l\f\3\h\3\k\z\x\x\5\0\1\o\m\p\d\v\l\3\s\1\f\7\v\3\i\g\c\j\z\n\x\c\5\3\r\c\w\l\c\v\k\j\v\4\g\a\e\3\x\e\7\9\1\x\p\l\v\0\8\0\1\s\o\7\3\x\j\7\i\j\6\u\x\u\y\p\f\j\z\5\6\8\j\x\6\r\r\d\h\c\u\b\9\u\l\a\x\3\x\f\7\h\9\h\b\7\0\o\j\p\j\r\m\z\c\x\b\i\2\f\k\z\r\6\q\c\d\m\5\0\q\n\d\5\6\1\v\6\b\g\m\b\v\v\0\g\n\q\c\5\l\8\8\o\n\c\e\c\g\z\t\r\k\h\0\8\8\6\8\k\h\c\v\6\d\8\4\e\r\l\r\a\l\c\m\w\f\d\p\6\p\b\l\v\8\t\u\a\p\w\j\u\k\r\z\g\v\t\o\2\y\t\f\3\a\r\r\5\a\0\k\c\m\4\z\b\e\f\3\9\v\u\r\m\5\l\b\6\i\6\7\w\w\v\i\q\6\w\8\7\l\7\w\0\4\l\a\h\z\l\y\3\0\t\f\g\d\t\l\f\0\3\w\5\1\h\j\z\l\t\1\r\y\h\5\7\m\0\f\n\c\2\w\0\n\1\1\0\m\4\3\7\s\l\5\6\n\z\e\p\r\i\m\p\9\i\q\t\x\u\z\n\p\7\n\3\5\d\b\5\5\i\9\h\c\8\f\4\w\s\w\b\g\5\3\l ]] 00:06:06.353 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:06.353 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:06.353 [2024-11-05 09:28:52.169542] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:06.353 [2024-11-05 09:28:52.169797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60204 ] 00:06:06.610 [2024-11-05 09:28:52.314818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.610 [2024-11-05 09:28:52.344525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.610 [2024-11-05 09:28:52.370703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.610  [2024-11-05T09:28:52.568Z] Copying: 512/512 [B] (average 500 kBps) 00:06:06.610 00:06:06.610 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 69fh9oxxi4mxi1d0wbf4f37ll1d9x5jkinihdr1w0mlmf7xpgr3d5dlvmlk0ajnz6pz91ekjixzkzq1qbewddizpds0qxyhkgs2f6k4u0koleiw2fnzz7wpof9dn8jwdojs64jhqmev9zkw8t7i0asrnxajvwmz1teq6rlf3h3kzxx501ompdvl3s1f7v3igcjznxc53rcwlcvkjv4gae3xe791xplv0801so73xj7ij6uxuypfjz568jx6rrdhcub9ulax3xf7h9hb70ojpjrmzcxbi2fkzr6qcdm50qnd561v6bgmbvv0gnqc5l88oncecgztrkh08868khcv6d84erlralcmwfdp6pblv8tuapwjukrzgvto2ytf3arr5a0kcm4zbef39vurm5lb6i67wwviq6w87l7w04lahzly30tfgdtlf03w51hjzlt1ryh57m0fnc2w0n110m437sl56nzeprimp9iqtxuznp7n35db55i9hc8f4wswbg53l == \6\9\f\h\9\o\x\x\i\4\m\x\i\1\d\0\w\b\f\4\f\3\7\l\l\1\d\9\x\5\j\k\i\n\i\h\d\r\1\w\0\m\l\m\f\7\x\p\g\r\3\d\5\d\l\v\m\l\k\0\a\j\n\z\6\p\z\9\1\e\k\j\i\x\z\k\z\q\1\q\b\e\w\d\d\i\z\p\d\s\0\q\x\y\h\k\g\s\2\f\6\k\4\u\0\k\o\l\e\i\w\2\f\n\z\z\7\w\p\o\f\9\d\n\8\j\w\d\o\j\s\6\4\j\h\q\m\e\v\9\z\k\w\8\t\7\i\0\a\s\r\n\x\a\j\v\w\m\z\1\t\e\q\6\r\l\f\3\h\3\k\z\x\x\5\0\1\o\m\p\d\v\l\3\s\1\f\7\v\3\i\g\c\j\z\n\x\c\5\3\r\c\w\l\c\v\k\j\v\4\g\a\e\3\x\e\7\9\1\x\p\l\v\0\8\0\1\s\o\7\3\x\j\7\i\j\6\u\x\u\y\p\f\j\z\5\6\8\j\x\6\r\r\d\h\c\u\b\9\u\l\a\x\3\x\f\7\h\9\h\b\7\0\o\j\p\j\r\m\z\c\x\b\i\2\f\k\z\r\6\q\c\d\m\5\0\q\n\d\5\6\1\v\6\b\g\m\b\v\v\0\g\n\q\c\5\l\8\8\o\n\c\e\c\g\z\t\r\k\h\0\8\8\6\8\k\h\c\v\6\d\8\4\e\r\l\r\a\l\c\m\w\f\d\p\6\p\b\l\v\8\t\u\a\p\w\j\u\k\r\z\g\v\t\o\2\y\t\f\3\a\r\r\5\a\0\k\c\m\4\z\b\e\f\3\9\v\u\r\m\5\l\b\6\i\6\7\w\w\v\i\q\6\w\8\7\l\7\w\0\4\l\a\h\z\l\y\3\0\t\f\g\d\t\l\f\0\3\w\5\1\h\j\z\l\t\1\r\y\h\5\7\m\0\f\n\c\2\w\0\n\1\1\0\m\4\3\7\s\l\5\6\n\z\e\p\r\i\m\p\9\i\q\t\x\u\z\n\p\7\n\3\5\d\b\5\5\i\9\h\c\8\f\4\w\s\w\b\g\5\3\l ]] 00:06:06.610 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:06.610 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:06:06.611 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:06:06.611 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:06.611 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:06.611 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:06.611 [2024-11-05 09:28:52.561482] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:06.611 [2024-11-05 09:28:52.561572] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60219 ] 00:06:06.869 [2024-11-05 09:28:52.704781] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.869 [2024-11-05 09:28:52.735266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.869 [2024-11-05 09:28:52.768223] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:06.869  [2024-11-05T09:28:53.086Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.128 00:06:07.128 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 67mmsmvc7lwnk0vpi47n81eb64m8ph3m58337v7sz19v0rk7btlrpc79lg6p72cfmgyh66xj5avwcknj8dw6ilxshfs7u1ej5f69dbf2hr0hj1ld2achud6nhljwsyatwdctxooe9hfbpsn2txwjl0g865xn1aq6n9yp6ai1axj7ev0ac51wu6h5za1zvwb8l9ogevaapfox4jlol1wfb6vc0u0xbrcarns1esj07n7svojxpz0qo19y855iyi01u92pyxb63y7uvucrgcdme3disbv0piax1p2ki7mtyw91u0m3gfcuqvqwatug9m1lxb9eayt5swzbgxw2bod4dt4x5t1rgtvfaj19nql8pclekv2q2v61asidlfq6c9o9r4fn0r7z3aibtss24g8jkt5dvt96y6llidasai1v2wyj6kudpo7vyt584wjpgqwpu56ocm4w3s2fcej48elyyqve384n8r2xr2xlvpw6ajnupvtdk5rv2sbh63phchec == \6\7\m\m\s\m\v\c\7\l\w\n\k\0\v\p\i\4\7\n\8\1\e\b\6\4\m\8\p\h\3\m\5\8\3\3\7\v\7\s\z\1\9\v\0\r\k\7\b\t\l\r\p\c\7\9\l\g\6\p\7\2\c\f\m\g\y\h\6\6\x\j\5\a\v\w\c\k\n\j\8\d\w\6\i\l\x\s\h\f\s\7\u\1\e\j\5\f\6\9\d\b\f\2\h\r\0\h\j\1\l\d\2\a\c\h\u\d\6\n\h\l\j\w\s\y\a\t\w\d\c\t\x\o\o\e\9\h\f\b\p\s\n\2\t\x\w\j\l\0\g\8\6\5\x\n\1\a\q\6\n\9\y\p\6\a\i\1\a\x\j\7\e\v\0\a\c\5\1\w\u\6\h\5\z\a\1\z\v\w\b\8\l\9\o\g\e\v\a\a\p\f\o\x\4\j\l\o\l\1\w\f\b\6\v\c\0\u\0\x\b\r\c\a\r\n\s\1\e\s\j\0\7\n\7\s\v\o\j\x\p\z\0\q\o\1\9\y\8\5\5\i\y\i\0\1\u\9\2\p\y\x\b\6\3\y\7\u\v\u\c\r\g\c\d\m\e\3\d\i\s\b\v\0\p\i\a\x\1\p\2\k\i\7\m\t\y\w\9\1\u\0\m\3\g\f\c\u\q\v\q\w\a\t\u\g\9\m\1\l\x\b\9\e\a\y\t\5\s\w\z\b\g\x\w\2\b\o\d\4\d\t\4\x\5\t\1\r\g\t\v\f\a\j\1\9\n\q\l\8\p\c\l\e\k\v\2\q\2\v\6\1\a\s\i\d\l\f\q\6\c\9\o\9\r\4\f\n\0\r\7\z\3\a\i\b\t\s\s\2\4\g\8\j\k\t\5\d\v\t\9\6\y\6\l\l\i\d\a\s\a\i\1\v\2\w\y\j\6\k\u\d\p\o\7\v\y\t\5\8\4\w\j\p\g\q\w\p\u\5\6\o\c\m\4\w\3\s\2\f\c\e\j\4\8\e\l\y\y\q\v\e\3\8\4\n\8\r\2\x\r\2\x\l\v\p\w\6\a\j\n\u\p\v\t\d\k\5\r\v\2\s\b\h\6\3\p\h\c\h\e\c ]] 00:06:07.128 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.128 09:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:07.128 [2024-11-05 09:28:52.949423] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:07.128 [2024-11-05 09:28:52.949514] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60223 ] 00:06:07.387 [2024-11-05 09:28:53.094681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.387 [2024-11-05 09:28:53.122236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.387 [2024-11-05 09:28:53.148720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.387  [2024-11-05T09:28:53.345Z] Copying: 512/512 [B] (average 500 kBps) 00:06:07.387 00:06:07.387 09:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 67mmsmvc7lwnk0vpi47n81eb64m8ph3m58337v7sz19v0rk7btlrpc79lg6p72cfmgyh66xj5avwcknj8dw6ilxshfs7u1ej5f69dbf2hr0hj1ld2achud6nhljwsyatwdctxooe9hfbpsn2txwjl0g865xn1aq6n9yp6ai1axj7ev0ac51wu6h5za1zvwb8l9ogevaapfox4jlol1wfb6vc0u0xbrcarns1esj07n7svojxpz0qo19y855iyi01u92pyxb63y7uvucrgcdme3disbv0piax1p2ki7mtyw91u0m3gfcuqvqwatug9m1lxb9eayt5swzbgxw2bod4dt4x5t1rgtvfaj19nql8pclekv2q2v61asidlfq6c9o9r4fn0r7z3aibtss24g8jkt5dvt96y6llidasai1v2wyj6kudpo7vyt584wjpgqwpu56ocm4w3s2fcej48elyyqve384n8r2xr2xlvpw6ajnupvtdk5rv2sbh63phchec == \6\7\m\m\s\m\v\c\7\l\w\n\k\0\v\p\i\4\7\n\8\1\e\b\6\4\m\8\p\h\3\m\5\8\3\3\7\v\7\s\z\1\9\v\0\r\k\7\b\t\l\r\p\c\7\9\l\g\6\p\7\2\c\f\m\g\y\h\6\6\x\j\5\a\v\w\c\k\n\j\8\d\w\6\i\l\x\s\h\f\s\7\u\1\e\j\5\f\6\9\d\b\f\2\h\r\0\h\j\1\l\d\2\a\c\h\u\d\6\n\h\l\j\w\s\y\a\t\w\d\c\t\x\o\o\e\9\h\f\b\p\s\n\2\t\x\w\j\l\0\g\8\6\5\x\n\1\a\q\6\n\9\y\p\6\a\i\1\a\x\j\7\e\v\0\a\c\5\1\w\u\6\h\5\z\a\1\z\v\w\b\8\l\9\o\g\e\v\a\a\p\f\o\x\4\j\l\o\l\1\w\f\b\6\v\c\0\u\0\x\b\r\c\a\r\n\s\1\e\s\j\0\7\n\7\s\v\o\j\x\p\z\0\q\o\1\9\y\8\5\5\i\y\i\0\1\u\9\2\p\y\x\b\6\3\y\7\u\v\u\c\r\g\c\d\m\e\3\d\i\s\b\v\0\p\i\a\x\1\p\2\k\i\7\m\t\y\w\9\1\u\0\m\3\g\f\c\u\q\v\q\w\a\t\u\g\9\m\1\l\x\b\9\e\a\y\t\5\s\w\z\b\g\x\w\2\b\o\d\4\d\t\4\x\5\t\1\r\g\t\v\f\a\j\1\9\n\q\l\8\p\c\l\e\k\v\2\q\2\v\6\1\a\s\i\d\l\f\q\6\c\9\o\9\r\4\f\n\0\r\7\z\3\a\i\b\t\s\s\2\4\g\8\j\k\t\5\d\v\t\9\6\y\6\l\l\i\d\a\s\a\i\1\v\2\w\y\j\6\k\u\d\p\o\7\v\y\t\5\8\4\w\j\p\g\q\w\p\u\5\6\o\c\m\4\w\3\s\2\f\c\e\j\4\8\e\l\y\y\q\v\e\3\8\4\n\8\r\2\x\r\2\x\l\v\p\w\6\a\j\n\u\p\v\t\d\k\5\r\v\2\s\b\h\6\3\p\h\c\h\e\c ]] 00:06:07.387 09:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.387 09:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:07.387 [2024-11-05 09:28:53.335517] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:07.387 [2024-11-05 09:28:53.335761] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60227 ] 00:06:07.647 [2024-11-05 09:28:53.481396] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.647 [2024-11-05 09:28:53.510146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.647 [2024-11-05 09:28:53.537625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:07.647  [2024-11-05T09:28:53.864Z] Copying: 512/512 [B] (average 250 kBps) 00:06:07.906 00:06:07.906 09:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 67mmsmvc7lwnk0vpi47n81eb64m8ph3m58337v7sz19v0rk7btlrpc79lg6p72cfmgyh66xj5avwcknj8dw6ilxshfs7u1ej5f69dbf2hr0hj1ld2achud6nhljwsyatwdctxooe9hfbpsn2txwjl0g865xn1aq6n9yp6ai1axj7ev0ac51wu6h5za1zvwb8l9ogevaapfox4jlol1wfb6vc0u0xbrcarns1esj07n7svojxpz0qo19y855iyi01u92pyxb63y7uvucrgcdme3disbv0piax1p2ki7mtyw91u0m3gfcuqvqwatug9m1lxb9eayt5swzbgxw2bod4dt4x5t1rgtvfaj19nql8pclekv2q2v61asidlfq6c9o9r4fn0r7z3aibtss24g8jkt5dvt96y6llidasai1v2wyj6kudpo7vyt584wjpgqwpu56ocm4w3s2fcej48elyyqve384n8r2xr2xlvpw6ajnupvtdk5rv2sbh63phchec == \6\7\m\m\s\m\v\c\7\l\w\n\k\0\v\p\i\4\7\n\8\1\e\b\6\4\m\8\p\h\3\m\5\8\3\3\7\v\7\s\z\1\9\v\0\r\k\7\b\t\l\r\p\c\7\9\l\g\6\p\7\2\c\f\m\g\y\h\6\6\x\j\5\a\v\w\c\k\n\j\8\d\w\6\i\l\x\s\h\f\s\7\u\1\e\j\5\f\6\9\d\b\f\2\h\r\0\h\j\1\l\d\2\a\c\h\u\d\6\n\h\l\j\w\s\y\a\t\w\d\c\t\x\o\o\e\9\h\f\b\p\s\n\2\t\x\w\j\l\0\g\8\6\5\x\n\1\a\q\6\n\9\y\p\6\a\i\1\a\x\j\7\e\v\0\a\c\5\1\w\u\6\h\5\z\a\1\z\v\w\b\8\l\9\o\g\e\v\a\a\p\f\o\x\4\j\l\o\l\1\w\f\b\6\v\c\0\u\0\x\b\r\c\a\r\n\s\1\e\s\j\0\7\n\7\s\v\o\j\x\p\z\0\q\o\1\9\y\8\5\5\i\y\i\0\1\u\9\2\p\y\x\b\6\3\y\7\u\v\u\c\r\g\c\d\m\e\3\d\i\s\b\v\0\p\i\a\x\1\p\2\k\i\7\m\t\y\w\9\1\u\0\m\3\g\f\c\u\q\v\q\w\a\t\u\g\9\m\1\l\x\b\9\e\a\y\t\5\s\w\z\b\g\x\w\2\b\o\d\4\d\t\4\x\5\t\1\r\g\t\v\f\a\j\1\9\n\q\l\8\p\c\l\e\k\v\2\q\2\v\6\1\a\s\i\d\l\f\q\6\c\9\o\9\r\4\f\n\0\r\7\z\3\a\i\b\t\s\s\2\4\g\8\j\k\t\5\d\v\t\9\6\y\6\l\l\i\d\a\s\a\i\1\v\2\w\y\j\6\k\u\d\p\o\7\v\y\t\5\8\4\w\j\p\g\q\w\p\u\5\6\o\c\m\4\w\3\s\2\f\c\e\j\4\8\e\l\y\y\q\v\e\3\8\4\n\8\r\2\x\r\2\x\l\v\p\w\6\a\j\n\u\p\v\t\d\k\5\r\v\2\s\b\h\6\3\p\h\c\h\e\c ]] 00:06:07.906 09:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:07.906 09:28:53 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:07.906 [2024-11-05 09:28:53.720492] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:07.906 [2024-11-05 09:28:53.720724] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60242 ] 00:06:07.906 [2024-11-05 09:28:53.859232] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.165 [2024-11-05 09:28:53.890393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.165 [2024-11-05 09:28:53.917439] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.165  [2024-11-05T09:28:54.123Z] Copying: 512/512 [B] (average 500 kBps) 00:06:08.165 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 67mmsmvc7lwnk0vpi47n81eb64m8ph3m58337v7sz19v0rk7btlrpc79lg6p72cfmgyh66xj5avwcknj8dw6ilxshfs7u1ej5f69dbf2hr0hj1ld2achud6nhljwsyatwdctxooe9hfbpsn2txwjl0g865xn1aq6n9yp6ai1axj7ev0ac51wu6h5za1zvwb8l9ogevaapfox4jlol1wfb6vc0u0xbrcarns1esj07n7svojxpz0qo19y855iyi01u92pyxb63y7uvucrgcdme3disbv0piax1p2ki7mtyw91u0m3gfcuqvqwatug9m1lxb9eayt5swzbgxw2bod4dt4x5t1rgtvfaj19nql8pclekv2q2v61asidlfq6c9o9r4fn0r7z3aibtss24g8jkt5dvt96y6llidasai1v2wyj6kudpo7vyt584wjpgqwpu56ocm4w3s2fcej48elyyqve384n8r2xr2xlvpw6ajnupvtdk5rv2sbh63phchec == \6\7\m\m\s\m\v\c\7\l\w\n\k\0\v\p\i\4\7\n\8\1\e\b\6\4\m\8\p\h\3\m\5\8\3\3\7\v\7\s\z\1\9\v\0\r\k\7\b\t\l\r\p\c\7\9\l\g\6\p\7\2\c\f\m\g\y\h\6\6\x\j\5\a\v\w\c\k\n\j\8\d\w\6\i\l\x\s\h\f\s\7\u\1\e\j\5\f\6\9\d\b\f\2\h\r\0\h\j\1\l\d\2\a\c\h\u\d\6\n\h\l\j\w\s\y\a\t\w\d\c\t\x\o\o\e\9\h\f\b\p\s\n\2\t\x\w\j\l\0\g\8\6\5\x\n\1\a\q\6\n\9\y\p\6\a\i\1\a\x\j\7\e\v\0\a\c\5\1\w\u\6\h\5\z\a\1\z\v\w\b\8\l\9\o\g\e\v\a\a\p\f\o\x\4\j\l\o\l\1\w\f\b\6\v\c\0\u\0\x\b\r\c\a\r\n\s\1\e\s\j\0\7\n\7\s\v\o\j\x\p\z\0\q\o\1\9\y\8\5\5\i\y\i\0\1\u\9\2\p\y\x\b\6\3\y\7\u\v\u\c\r\g\c\d\m\e\3\d\i\s\b\v\0\p\i\a\x\1\p\2\k\i\7\m\t\y\w\9\1\u\0\m\3\g\f\c\u\q\v\q\w\a\t\u\g\9\m\1\l\x\b\9\e\a\y\t\5\s\w\z\b\g\x\w\2\b\o\d\4\d\t\4\x\5\t\1\r\g\t\v\f\a\j\1\9\n\q\l\8\p\c\l\e\k\v\2\q\2\v\6\1\a\s\i\d\l\f\q\6\c\9\o\9\r\4\f\n\0\r\7\z\3\a\i\b\t\s\s\2\4\g\8\j\k\t\5\d\v\t\9\6\y\6\l\l\i\d\a\s\a\i\1\v\2\w\y\j\6\k\u\d\p\o\7\v\y\t\5\8\4\w\j\p\g\q\w\p\u\5\6\o\c\m\4\w\3\s\2\f\c\e\j\4\8\e\l\y\y\q\v\e\3\8\4\n\8\r\2\x\r\2\x\l\v\p\w\6\a\j\n\u\p\v\t\d\k\5\r\v\2\s\b\h\6\3\p\h\c\h\e\c ]] 00:06:08.165 00:06:08.165 real 0m3.099s 00:06:08.165 user 0m1.541s 00:06:08.165 sys 0m1.318s 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.165 ************************************ 00:06:08.165 END TEST dd_flags_misc 00:06:08.165 ************************************ 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:06:08.165 * Second test run, disabling liburing, forcing AIO 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.165 ************************************ 00:06:08.165 START TEST dd_flag_append_forced_aio 00:06:08.165 ************************************ 00:06:08.165 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1127 -- # append 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=tb4kn8w7f1uuxtmtk51ofga2lsshfr5u 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=mkn8y484atp45l47jsgubvdkk5zvte9u 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s tb4kn8w7f1uuxtmtk51ofga2lsshfr5u 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s mkn8y484atp45l47jsgubvdkk5zvte9u 00:06:08.166 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:06:08.425 [2024-11-05 09:28:54.161786] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:08.425 [2024-11-05 09:28:54.162035] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60265 ] 00:06:08.425 [2024-11-05 09:28:54.306418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.425 [2024-11-05 09:28:54.337675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.425 [2024-11-05 09:28:54.365731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.425  [2024-11-05T09:28:54.642Z] Copying: 32/32 [B] (average 31 kBps) 00:06:08.684 00:06:08.684 ************************************ 00:06:08.684 END TEST dd_flag_append_forced_aio 00:06:08.684 ************************************ 00:06:08.684 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ mkn8y484atp45l47jsgubvdkk5zvte9utb4kn8w7f1uuxtmtk51ofga2lsshfr5u == \m\k\n\8\y\4\8\4\a\t\p\4\5\l\4\7\j\s\g\u\b\v\d\k\k\5\z\v\t\e\9\u\t\b\4\k\n\8\w\7\f\1\u\u\x\t\m\t\k\5\1\o\f\g\a\2\l\s\s\h\f\r\5\u ]] 00:06:08.684 00:06:08.684 real 0m0.422s 00:06:08.684 user 0m0.209s 00:06:08.684 sys 0m0.089s 00:06:08.684 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:08.685 ************************************ 00:06:08.685 START TEST dd_flag_directory_forced_aio 00:06:08.685 ************************************ 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1127 -- # directory 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.685 09:28:54 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:08.685 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:08.685 [2024-11-05 09:28:54.622728] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:08.685 [2024-11-05 09:28:54.622808] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60297 ] 00:06:08.944 [2024-11-05 09:28:54.760119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.944 [2024-11-05 09:28:54.787310] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.944 [2024-11-05 09:28:54.813264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:08.944 [2024-11-05 09:28:54.829912] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:08.944 [2024-11-05 09:28:54.829962] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:08.944 [2024-11-05 09:28:54.829993] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:08.944 [2024-11-05 09:28:54.887471] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.203 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.204 09:28:54 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:06:09.204 [2024-11-05 09:28:54.989404] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:09.204 [2024-11-05 09:28:54.989635] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60303 ] 00:06:09.204 [2024-11-05 09:28:55.124118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.204 [2024-11-05 09:28:55.154250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.463 [2024-11-05 09:28:55.182254] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.463 [2024-11-05 09:28:55.198660] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.463 [2024-11-05 09:28:55.198709] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:06:09.463 [2024-11-05 09:28:55.198742] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.463 [2024-11-05 09:28:55.255478] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.463 ************************************ 00:06:09.463 END TEST dd_flag_directory_forced_aio 00:06:09.463 ************************************ 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.463 09:28:55 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.463 00:06:09.463 real 0m0.736s 00:06:09.463 user 0m0.378s 00:06:09.463 sys 0m0.151s 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:09.463 ************************************ 00:06:09.463 START TEST dd_flag_nofollow_forced_aio 00:06:09.463 ************************************ 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1127 -- # nofollow 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case 
"$(type -t "$arg")" in 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.463 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:09.723 [2024-11-05 09:28:55.427135] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:09.723 [2024-11-05 09:28:55.427233] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60332 ] 00:06:09.723 [2024-11-05 09:28:55.573920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.723 [2024-11-05 09:28:55.600543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.723 [2024-11-05 09:28:55.629166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:09.723 [2024-11-05 09:28:55.646337] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:09.723 [2024-11-05 09:28:55.646386] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:06:09.723 [2024-11-05 09:28:55.646419] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:09.981 [2024-11-05 09:28:55.703897] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:09.981 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:09.981 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.981 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:09.981 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:09.981 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:09.981 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:09.982 09:28:55 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:06:09.982 [2024-11-05 09:28:55.793614] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:09.982 [2024-11-05 09:28:55.793725] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60341 ] 00:06:09.982 [2024-11-05 09:28:55.928004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.240 [2024-11-05 09:28:55.956363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.240 [2024-11-05 09:28:55.984835] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.240 [2024-11-05 09:28:56.003195] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:10.240 [2024-11-05 09:28:56.003244] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:06:10.240 [2024-11-05 09:28:56.003293] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:10.240 [2024-11-05 09:28:56.060818] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:10.240 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.240 [2024-11-05 09:28:56.186213] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:10.240 [2024-11-05 09:28:56.186475] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60343 ] 00:06:10.499 [2024-11-05 09:28:56.333332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.499 [2024-11-05 09:28:56.360856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.499 [2024-11-05 09:28:56.389276] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.499  [2024-11-05T09:28:56.716Z] Copying: 512/512 [B] (average 500 kBps) 00:06:10.758 00:06:10.758 ************************************ 00:06:10.758 END TEST dd_flag_nofollow_forced_aio 00:06:10.758 ************************************ 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 3eyxiyopbqg4xw2mun3mmcpn90uutcms92lkaa1frbly8shl2xac8epc90xtjegc9uzny280ibbvrzwezlokjo00cymca1jhacacfemto6rnbu0nlb13667b7qfza9uqgcq1jqym0ivby5f33a2q3xnozhexarak94888rlot9o80sorbz1mm4qmd3d2xt54xis97yrb6c5rure0o1fmar36210jhenfw569nuv5ibnr9vzp62glg2helrfdar3rsigh0met209c8pc82fh1gck39o2oq1keqtvzimxp79zba4if24shcivv0z1rezl6jtysvtzqrvhoumtzd3qw6yyzmjbke6pzlh1k5m3vifz0vmjkzcc9ni1yqx9v4t71svaa478xv87u0ahaiqvq5klntwc0covhhtzr76nxrjy4vw2wcpp9bz4wf4qfkfa6i8y5yt6yzuz8h6bugqqca5l3o0giox5eps9peqwl8h6r24s2oxydrwpmm7p5q02i == \3\e\y\x\i\y\o\p\b\q\g\4\x\w\2\m\u\n\3\m\m\c\p\n\9\0\u\u\t\c\m\s\9\2\l\k\a\a\1\f\r\b\l\y\8\s\h\l\2\x\a\c\8\e\p\c\9\0\x\t\j\e\g\c\9\u\z\n\y\2\8\0\i\b\b\v\r\z\w\e\z\l\o\k\j\o\0\0\c\y\m\c\a\1\j\h\a\c\a\c\f\e\m\t\o\6\r\n\b\u\0\n\l\b\1\3\6\6\7\b\7\q\f\z\a\9\u\q\g\c\q\1\j\q\y\m\0\i\v\b\y\5\f\3\3\a\2\q\3\x\n\o\z\h\e\x\a\r\a\k\9\4\8\8\8\r\l\o\t\9\o\8\0\s\o\r\b\z\1\m\m\4\q\m\d\3\d\2\x\t\5\4\x\i\s\9\7\y\r\b\6\c\5\r\u\r\e\0\o\1\f\m\a\r\3\6\2\1\0\j\h\e\n\f\w\5\6\9\n\u\v\5\i\b\n\r\9\v\z\p\6\2\g\l\g\2\h\e\l\r\f\d\a\r\3\r\s\i\g\h\0\m\e\t\2\0\9\c\8\p\c\8\2\f\h\1\g\c\k\3\9\o\2\o\q\1\k\e\q\t\v\z\i\m\x\p\7\9\z\b\a\4\i\f\2\4\s\h\c\i\v\v\0\z\1\r\e\z\l\6\j\t\y\s\v\t\z\q\r\v\h\o\u\m\t\z\d\3\q\w\6\y\y\z\m\j\b\k\e\6\p\z\l\h\1\k\5\m\3\v\i\f\z\0\v\m\j\k\z\c\c\9\n\i\1\y\q\x\9\v\4\t\7\1\s\v\a\a\4\7\8\x\v\8\7\u\0\a\h\a\i\q\v\q\5\k\l\n\t\w\c\0\c\o\v\h\h\t\z\r\7\6\n\x\r\j\y\4\v\w\2\w\c\p\p\9\b\z\4\w\f\4\q\f\k\f\a\6\i\8\y\5\y\t\6\y\z\u\z\8\h\6\b\u\g\q\q\c\a\5\l\3\o\0\g\i\o\x\5\e\p\s\9\p\e\q\w\l\8\h\6\r\2\4\s\2\o\x\y\d\r\w\p\m\m\7\p\5\q\0\2\i ]] 00:06:10.758 00:06:10.758 real 0m1.174s 00:06:10.758 user 0m0.575s 00:06:10.758 sys 0m0.262s 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
-- # run_test dd_flag_noatime_forced_aio noatime 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:10.758 ************************************ 00:06:10.758 START TEST dd_flag_noatime_forced_aio 00:06:10.758 ************************************ 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1127 -- # noatime 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:06:10.758 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1730798936 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1730798936 00:06:10.759 09:28:56 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:06:11.695 09:28:57 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:11.954 [2024-11-05 09:28:57.669564] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:11.954 [2024-11-05 09:28:57.669884] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60389 ] 00:06:11.954 [2024-11-05 09:28:57.821916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.954 [2024-11-05 09:28:57.860057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.954 [2024-11-05 09:28:57.892394] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.214  [2024-11-05T09:28:58.172Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.214 00:06:12.214 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.214 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1730798936 )) 00:06:12.214 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.214 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1730798936 )) 00:06:12.214 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:12.214 [2024-11-05 09:28:58.123987] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:12.214 [2024-11-05 09:28:58.124230] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60395 ] 00:06:12.473 [2024-11-05 09:28:58.268611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.473 [2024-11-05 09:28:58.296439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.473 [2024-11-05 09:28:58.323074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.473  [2024-11-05T09:28:58.690Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.732 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:12.733 ************************************ 00:06:12.733 END TEST dd_flag_noatime_forced_aio 00:06:12.733 ************************************ 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1730798938 )) 00:06:12.733 00:06:12.733 real 0m1.891s 00:06:12.733 user 0m0.451s 00:06:12.733 sys 0m0.193s 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:06:12.733 ************************************ 00:06:12.733 START TEST dd_flags_misc_forced_aio 00:06:12.733 ************************************ 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1127 -- # io 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.733 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:12.733 [2024-11-05 09:28:58.587779] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:12.733 [2024-11-05 09:28:58.587868] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:06:12.992 [2024-11-05 09:28:58.723670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.992 [2024-11-05 09:28:58.750772] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.992 [2024-11-05 09:28:58.776825] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:12.992  [2024-11-05T09:28:58.950Z] Copying: 512/512 [B] (average 500 kBps) 00:06:12.992 00:06:12.992 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ m5cijt63jg9e9mcsujv5yoqkaobe7ezurplfl76nz3484g63qfc17n24jtc2asuwhd66kefii0szrr18rv1wd3p99o1wihjzeo2y6lxsz7f0ds4hlxmqji5lbje49lfm9pasrf3zo4mujq1zd9zjxevp0r66vmzshwp296uxf8b5uxmrqd59xayteb0ij9n3017aei44sm4iqeup6mfrr5x3772sm083p3tmbtqdmaog3viczppsx6j3bsa1ighrz6ku0uflqait1vfsy8pwk9ngsefl7f3jq9d3eq6da2j0d6ty0bz15p9lmpxgd2wnkpowwcx8o81xji5d9nqwxb54amh0l3c6bkpf6dtbvf3vfzs6mzh4uy7x2wrx3axc0d92wjvc9issbm9sx4ozkql3wp067k89ed2edy5pfdy2rgks69hyyg42ose83eqjaps46f9p45rs34jdrwgf6chh432qi0ib7n0ipfusvktjgyb1py82ip4zod2tytgs == 
\m\5\c\i\j\t\6\3\j\g\9\e\9\m\c\s\u\j\v\5\y\o\q\k\a\o\b\e\7\e\z\u\r\p\l\f\l\7\6\n\z\3\4\8\4\g\6\3\q\f\c\1\7\n\2\4\j\t\c\2\a\s\u\w\h\d\6\6\k\e\f\i\i\0\s\z\r\r\1\8\r\v\1\w\d\3\p\9\9\o\1\w\i\h\j\z\e\o\2\y\6\l\x\s\z\7\f\0\d\s\4\h\l\x\m\q\j\i\5\l\b\j\e\4\9\l\f\m\9\p\a\s\r\f\3\z\o\4\m\u\j\q\1\z\d\9\z\j\x\e\v\p\0\r\6\6\v\m\z\s\h\w\p\2\9\6\u\x\f\8\b\5\u\x\m\r\q\d\5\9\x\a\y\t\e\b\0\i\j\9\n\3\0\1\7\a\e\i\4\4\s\m\4\i\q\e\u\p\6\m\f\r\r\5\x\3\7\7\2\s\m\0\8\3\p\3\t\m\b\t\q\d\m\a\o\g\3\v\i\c\z\p\p\s\x\6\j\3\b\s\a\1\i\g\h\r\z\6\k\u\0\u\f\l\q\a\i\t\1\v\f\s\y\8\p\w\k\9\n\g\s\e\f\l\7\f\3\j\q\9\d\3\e\q\6\d\a\2\j\0\d\6\t\y\0\b\z\1\5\p\9\l\m\p\x\g\d\2\w\n\k\p\o\w\w\c\x\8\o\8\1\x\j\i\5\d\9\n\q\w\x\b\5\4\a\m\h\0\l\3\c\6\b\k\p\f\6\d\t\b\v\f\3\v\f\z\s\6\m\z\h\4\u\y\7\x\2\w\r\x\3\a\x\c\0\d\9\2\w\j\v\c\9\i\s\s\b\m\9\s\x\4\o\z\k\q\l\3\w\p\0\6\7\k\8\9\e\d\2\e\d\y\5\p\f\d\y\2\r\g\k\s\6\9\h\y\y\g\4\2\o\s\e\8\3\e\q\j\a\p\s\4\6\f\9\p\4\5\r\s\3\4\j\d\r\w\g\f\6\c\h\h\4\3\2\q\i\0\i\b\7\n\0\i\p\f\u\s\v\k\t\j\g\y\b\1\p\y\8\2\i\p\4\z\o\d\2\t\y\t\g\s ]] 00:06:12.992 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:12.992 09:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:13.251 [2024-11-05 09:28:58.979156] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:13.251 [2024-11-05 09:28:58.979256] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60429 ] 00:06:13.251 [2024-11-05 09:28:59.123290] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.251 [2024-11-05 09:28:59.151069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.251 [2024-11-05 09:28:59.177026] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.251  [2024-11-05T09:28:59.469Z] Copying: 512/512 [B] (average 500 kBps) 00:06:13.511 00:06:13.511 09:28:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ m5cijt63jg9e9mcsujv5yoqkaobe7ezurplfl76nz3484g63qfc17n24jtc2asuwhd66kefii0szrr18rv1wd3p99o1wihjzeo2y6lxsz7f0ds4hlxmqji5lbje49lfm9pasrf3zo4mujq1zd9zjxevp0r66vmzshwp296uxf8b5uxmrqd59xayteb0ij9n3017aei44sm4iqeup6mfrr5x3772sm083p3tmbtqdmaog3viczppsx6j3bsa1ighrz6ku0uflqait1vfsy8pwk9ngsefl7f3jq9d3eq6da2j0d6ty0bz15p9lmpxgd2wnkpowwcx8o81xji5d9nqwxb54amh0l3c6bkpf6dtbvf3vfzs6mzh4uy7x2wrx3axc0d92wjvc9issbm9sx4ozkql3wp067k89ed2edy5pfdy2rgks69hyyg42ose83eqjaps46f9p45rs34jdrwgf6chh432qi0ib7n0ipfusvktjgyb1py82ip4zod2tytgs == 
\m\5\c\i\j\t\6\3\j\g\9\e\9\m\c\s\u\j\v\5\y\o\q\k\a\o\b\e\7\e\z\u\r\p\l\f\l\7\6\n\z\3\4\8\4\g\6\3\q\f\c\1\7\n\2\4\j\t\c\2\a\s\u\w\h\d\6\6\k\e\f\i\i\0\s\z\r\r\1\8\r\v\1\w\d\3\p\9\9\o\1\w\i\h\j\z\e\o\2\y\6\l\x\s\z\7\f\0\d\s\4\h\l\x\m\q\j\i\5\l\b\j\e\4\9\l\f\m\9\p\a\s\r\f\3\z\o\4\m\u\j\q\1\z\d\9\z\j\x\e\v\p\0\r\6\6\v\m\z\s\h\w\p\2\9\6\u\x\f\8\b\5\u\x\m\r\q\d\5\9\x\a\y\t\e\b\0\i\j\9\n\3\0\1\7\a\e\i\4\4\s\m\4\i\q\e\u\p\6\m\f\r\r\5\x\3\7\7\2\s\m\0\8\3\p\3\t\m\b\t\q\d\m\a\o\g\3\v\i\c\z\p\p\s\x\6\j\3\b\s\a\1\i\g\h\r\z\6\k\u\0\u\f\l\q\a\i\t\1\v\f\s\y\8\p\w\k\9\n\g\s\e\f\l\7\f\3\j\q\9\d\3\e\q\6\d\a\2\j\0\d\6\t\y\0\b\z\1\5\p\9\l\m\p\x\g\d\2\w\n\k\p\o\w\w\c\x\8\o\8\1\x\j\i\5\d\9\n\q\w\x\b\5\4\a\m\h\0\l\3\c\6\b\k\p\f\6\d\t\b\v\f\3\v\f\z\s\6\m\z\h\4\u\y\7\x\2\w\r\x\3\a\x\c\0\d\9\2\w\j\v\c\9\i\s\s\b\m\9\s\x\4\o\z\k\q\l\3\w\p\0\6\7\k\8\9\e\d\2\e\d\y\5\p\f\d\y\2\r\g\k\s\6\9\h\y\y\g\4\2\o\s\e\8\3\e\q\j\a\p\s\4\6\f\9\p\4\5\r\s\3\4\j\d\r\w\g\f\6\c\h\h\4\3\2\q\i\0\i\b\7\n\0\i\p\f\u\s\v\k\t\j\g\y\b\1\p\y\8\2\i\p\4\z\o\d\2\t\y\t\g\s ]] 00:06:13.511 09:28:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.511 09:28:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:13.511 [2024-11-05 09:28:59.361895] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:13.511 [2024-11-05 09:28:59.361984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60431 ] 00:06:13.771 [2024-11-05 09:28:59.497604] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.771 [2024-11-05 09:28:59.524507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.771 [2024-11-05 09:28:59.550553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:13.771  [2024-11-05T09:28:59.729Z] Copying: 512/512 [B] (average 250 kBps) 00:06:13.771 00:06:13.771 09:28:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ m5cijt63jg9e9mcsujv5yoqkaobe7ezurplfl76nz3484g63qfc17n24jtc2asuwhd66kefii0szrr18rv1wd3p99o1wihjzeo2y6lxsz7f0ds4hlxmqji5lbje49lfm9pasrf3zo4mujq1zd9zjxevp0r66vmzshwp296uxf8b5uxmrqd59xayteb0ij9n3017aei44sm4iqeup6mfrr5x3772sm083p3tmbtqdmaog3viczppsx6j3bsa1ighrz6ku0uflqait1vfsy8pwk9ngsefl7f3jq9d3eq6da2j0d6ty0bz15p9lmpxgd2wnkpowwcx8o81xji5d9nqwxb54amh0l3c6bkpf6dtbvf3vfzs6mzh4uy7x2wrx3axc0d92wjvc9issbm9sx4ozkql3wp067k89ed2edy5pfdy2rgks69hyyg42ose83eqjaps46f9p45rs34jdrwgf6chh432qi0ib7n0ipfusvktjgyb1py82ip4zod2tytgs == 
\m\5\c\i\j\t\6\3\j\g\9\e\9\m\c\s\u\j\v\5\y\o\q\k\a\o\b\e\7\e\z\u\r\p\l\f\l\7\6\n\z\3\4\8\4\g\6\3\q\f\c\1\7\n\2\4\j\t\c\2\a\s\u\w\h\d\6\6\k\e\f\i\i\0\s\z\r\r\1\8\r\v\1\w\d\3\p\9\9\o\1\w\i\h\j\z\e\o\2\y\6\l\x\s\z\7\f\0\d\s\4\h\l\x\m\q\j\i\5\l\b\j\e\4\9\l\f\m\9\p\a\s\r\f\3\z\o\4\m\u\j\q\1\z\d\9\z\j\x\e\v\p\0\r\6\6\v\m\z\s\h\w\p\2\9\6\u\x\f\8\b\5\u\x\m\r\q\d\5\9\x\a\y\t\e\b\0\i\j\9\n\3\0\1\7\a\e\i\4\4\s\m\4\i\q\e\u\p\6\m\f\r\r\5\x\3\7\7\2\s\m\0\8\3\p\3\t\m\b\t\q\d\m\a\o\g\3\v\i\c\z\p\p\s\x\6\j\3\b\s\a\1\i\g\h\r\z\6\k\u\0\u\f\l\q\a\i\t\1\v\f\s\y\8\p\w\k\9\n\g\s\e\f\l\7\f\3\j\q\9\d\3\e\q\6\d\a\2\j\0\d\6\t\y\0\b\z\1\5\p\9\l\m\p\x\g\d\2\w\n\k\p\o\w\w\c\x\8\o\8\1\x\j\i\5\d\9\n\q\w\x\b\5\4\a\m\h\0\l\3\c\6\b\k\p\f\6\d\t\b\v\f\3\v\f\z\s\6\m\z\h\4\u\y\7\x\2\w\r\x\3\a\x\c\0\d\9\2\w\j\v\c\9\i\s\s\b\m\9\s\x\4\o\z\k\q\l\3\w\p\0\6\7\k\8\9\e\d\2\e\d\y\5\p\f\d\y\2\r\g\k\s\6\9\h\y\y\g\4\2\o\s\e\8\3\e\q\j\a\p\s\4\6\f\9\p\4\5\r\s\3\4\j\d\r\w\g\f\6\c\h\h\4\3\2\q\i\0\i\b\7\n\0\i\p\f\u\s\v\k\t\j\g\y\b\1\p\y\8\2\i\p\4\z\o\d\2\t\y\t\g\s ]] 00:06:13.771 09:28:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:13.771 09:28:59 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:14.030 [2024-11-05 09:28:59.745093] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:14.030 [2024-11-05 09:28:59.745349] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60444 ] 00:06:14.030 [2024-11-05 09:28:59.883926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.030 [2024-11-05 09:28:59.910816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.030 [2024-11-05 09:28:59.936951] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.030  [2024-11-05T09:29:00.246Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.288 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ m5cijt63jg9e9mcsujv5yoqkaobe7ezurplfl76nz3484g63qfc17n24jtc2asuwhd66kefii0szrr18rv1wd3p99o1wihjzeo2y6lxsz7f0ds4hlxmqji5lbje49lfm9pasrf3zo4mujq1zd9zjxevp0r66vmzshwp296uxf8b5uxmrqd59xayteb0ij9n3017aei44sm4iqeup6mfrr5x3772sm083p3tmbtqdmaog3viczppsx6j3bsa1ighrz6ku0uflqait1vfsy8pwk9ngsefl7f3jq9d3eq6da2j0d6ty0bz15p9lmpxgd2wnkpowwcx8o81xji5d9nqwxb54amh0l3c6bkpf6dtbvf3vfzs6mzh4uy7x2wrx3axc0d92wjvc9issbm9sx4ozkql3wp067k89ed2edy5pfdy2rgks69hyyg42ose83eqjaps46f9p45rs34jdrwgf6chh432qi0ib7n0ipfusvktjgyb1py82ip4zod2tytgs == 
\m\5\c\i\j\t\6\3\j\g\9\e\9\m\c\s\u\j\v\5\y\o\q\k\a\o\b\e\7\e\z\u\r\p\l\f\l\7\6\n\z\3\4\8\4\g\6\3\q\f\c\1\7\n\2\4\j\t\c\2\a\s\u\w\h\d\6\6\k\e\f\i\i\0\s\z\r\r\1\8\r\v\1\w\d\3\p\9\9\o\1\w\i\h\j\z\e\o\2\y\6\l\x\s\z\7\f\0\d\s\4\h\l\x\m\q\j\i\5\l\b\j\e\4\9\l\f\m\9\p\a\s\r\f\3\z\o\4\m\u\j\q\1\z\d\9\z\j\x\e\v\p\0\r\6\6\v\m\z\s\h\w\p\2\9\6\u\x\f\8\b\5\u\x\m\r\q\d\5\9\x\a\y\t\e\b\0\i\j\9\n\3\0\1\7\a\e\i\4\4\s\m\4\i\q\e\u\p\6\m\f\r\r\5\x\3\7\7\2\s\m\0\8\3\p\3\t\m\b\t\q\d\m\a\o\g\3\v\i\c\z\p\p\s\x\6\j\3\b\s\a\1\i\g\h\r\z\6\k\u\0\u\f\l\q\a\i\t\1\v\f\s\y\8\p\w\k\9\n\g\s\e\f\l\7\f\3\j\q\9\d\3\e\q\6\d\a\2\j\0\d\6\t\y\0\b\z\1\5\p\9\l\m\p\x\g\d\2\w\n\k\p\o\w\w\c\x\8\o\8\1\x\j\i\5\d\9\n\q\w\x\b\5\4\a\m\h\0\l\3\c\6\b\k\p\f\6\d\t\b\v\f\3\v\f\z\s\6\m\z\h\4\u\y\7\x\2\w\r\x\3\a\x\c\0\d\9\2\w\j\v\c\9\i\s\s\b\m\9\s\x\4\o\z\k\q\l\3\w\p\0\6\7\k\8\9\e\d\2\e\d\y\5\p\f\d\y\2\r\g\k\s\6\9\h\y\y\g\4\2\o\s\e\8\3\e\q\j\a\p\s\4\6\f\9\p\4\5\r\s\3\4\j\d\r\w\g\f\6\c\h\h\4\3\2\q\i\0\i\b\7\n\0\i\p\f\u\s\v\k\t\j\g\y\b\1\p\y\8\2\i\p\4\z\o\d\2\t\y\t\g\s ]] 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.288 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:06:14.288 [2024-11-05 09:29:00.156890] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
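The runs above and below walk a small flag matrix: direct and nonblock on the read side, those two plus sync and dsync on the write side. Reconstructed from the xtrace lines, the loop dd/posix.sh is executing looks roughly like the sketch below; gen_bytes and the final comparison are simplified stand-ins here (the real test renders both dump files as text and compares those renderings, which is what the very long [[ ... == ... ]] lines in the trace are).

    # Rough reconstruction from the xtrace above; not the verbatim test script.
    flags_ro=(direct nonblock)                  # input (read-side) flags
    flags_rw=("${flags_ro[@]}" sync dsync)      # output (write-side) flags
    DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    in=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
    out=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    for flag_ro in "${flags_ro[@]}"; do
        head -c 512 /dev/urandom > "$in"        # stand-in for gen_bytes 512
        for flag_rw in "${flags_rw[@]}"; do
            "$DD" --aio --if="$in" --iflag="$flag_ro" --of="$out" --oflag="$flag_rw"
            cmp "$in" "$out"                    # copy must be byte-identical
        done
    done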
00:06:14.288 [2024-11-05 09:29:00.156968] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60446 ] 00:06:14.546 [2024-11-05 09:29:00.305364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.546 [2024-11-05 09:29:00.336317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.546 [2024-11-05 09:29:00.362950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.546  [2024-11-05T09:29:00.763Z] Copying: 512/512 [B] (average 500 kBps) 00:06:14.806 00:06:14.806 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 95hivcr36er3q28s1gt987zzrvtjqddujrecol06plvnev334pdny7b4cxqexyocy4m3w871lhzs6ji40eqe5xdusl2av3bk28ijaot3p9vquq9s4ata1rvgt1fyju9rtxegu6qdp056u3vs1bupwrhckuztub8djjjxtb04nketvgwjgzvanjlrccdysf6g7inck19ulvfhi4xer52ad0q5lumv55m521yia7v24gvnd0jh1zwik806jb9bjahzhvsxnrul28jd2l9kmdnlaw60at1hpqkh3aaukf3bj2upzai887a5r37v0arrly5bl6yo2dbmux2spsv2n5r15n2v90w9mmer03hog2fnal3vkck6l3weusvlxrftpwp49yg9e0cv1yt0mh8edxamwre6z0i5ltce7qzdchznklsybvm89yal2qwuobt3fsfzej5sv0i5pzocv1iggfxxltod350bbjgyyz6qsymst9pxlb8ilnoday3xolte723f == \9\5\h\i\v\c\r\3\6\e\r\3\q\2\8\s\1\g\t\9\8\7\z\z\r\v\t\j\q\d\d\u\j\r\e\c\o\l\0\6\p\l\v\n\e\v\3\3\4\p\d\n\y\7\b\4\c\x\q\e\x\y\o\c\y\4\m\3\w\8\7\1\l\h\z\s\6\j\i\4\0\e\q\e\5\x\d\u\s\l\2\a\v\3\b\k\2\8\i\j\a\o\t\3\p\9\v\q\u\q\9\s\4\a\t\a\1\r\v\g\t\1\f\y\j\u\9\r\t\x\e\g\u\6\q\d\p\0\5\6\u\3\v\s\1\b\u\p\w\r\h\c\k\u\z\t\u\b\8\d\j\j\j\x\t\b\0\4\n\k\e\t\v\g\w\j\g\z\v\a\n\j\l\r\c\c\d\y\s\f\6\g\7\i\n\c\k\1\9\u\l\v\f\h\i\4\x\e\r\5\2\a\d\0\q\5\l\u\m\v\5\5\m\5\2\1\y\i\a\7\v\2\4\g\v\n\d\0\j\h\1\z\w\i\k\8\0\6\j\b\9\b\j\a\h\z\h\v\s\x\n\r\u\l\2\8\j\d\2\l\9\k\m\d\n\l\a\w\6\0\a\t\1\h\p\q\k\h\3\a\a\u\k\f\3\b\j\2\u\p\z\a\i\8\8\7\a\5\r\3\7\v\0\a\r\r\l\y\5\b\l\6\y\o\2\d\b\m\u\x\2\s\p\s\v\2\n\5\r\1\5\n\2\v\9\0\w\9\m\m\e\r\0\3\h\o\g\2\f\n\a\l\3\v\k\c\k\6\l\3\w\e\u\s\v\l\x\r\f\t\p\w\p\4\9\y\g\9\e\0\c\v\1\y\t\0\m\h\8\e\d\x\a\m\w\r\e\6\z\0\i\5\l\t\c\e\7\q\z\d\c\h\z\n\k\l\s\y\b\v\m\8\9\y\a\l\2\q\w\u\o\b\t\3\f\s\f\z\e\j\5\s\v\0\i\5\p\z\o\c\v\1\i\g\g\f\x\x\l\t\o\d\3\5\0\b\b\j\g\y\y\z\6\q\s\y\m\s\t\9\p\x\l\b\8\i\l\n\o\d\a\y\3\x\o\l\t\e\7\2\3\f ]] 00:06:14.806 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:14.806 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:06:14.806 [2024-11-05 09:29:00.547178] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
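A note on the [[ ... == \m\5\c... ]] lines: inside [[ ]], the right-hand operand of == is a glob pattern, so bash's xtrace prints that expansion with every character backslash-escaped to show it is being matched literally. Both operands are the same text rendering of the dump files, so the test passes exactly when input and output match byte for byte. A minimal illustration of the same xtrace behavior:

    $ set -x
    $ s=abc; [[ $s == $s ]]
    + s=abc
    + [[ abc == \a\b\c ]]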
00:06:14.806 [2024-11-05 09:29:00.547257] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60454 ] 00:06:14.806 [2024-11-05 09:29:00.681285] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.806 [2024-11-05 09:29:00.714577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.806 [2024-11-05 09:29:00.741245] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:14.806  [2024-11-05T09:29:01.023Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.065 00:06:15.065 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 95hivcr36er3q28s1gt987zzrvtjqddujrecol06plvnev334pdny7b4cxqexyocy4m3w871lhzs6ji40eqe5xdusl2av3bk28ijaot3p9vquq9s4ata1rvgt1fyju9rtxegu6qdp056u3vs1bupwrhckuztub8djjjxtb04nketvgwjgzvanjlrccdysf6g7inck19ulvfhi4xer52ad0q5lumv55m521yia7v24gvnd0jh1zwik806jb9bjahzhvsxnrul28jd2l9kmdnlaw60at1hpqkh3aaukf3bj2upzai887a5r37v0arrly5bl6yo2dbmux2spsv2n5r15n2v90w9mmer03hog2fnal3vkck6l3weusvlxrftpwp49yg9e0cv1yt0mh8edxamwre6z0i5ltce7qzdchznklsybvm89yal2qwuobt3fsfzej5sv0i5pzocv1iggfxxltod350bbjgyyz6qsymst9pxlb8ilnoday3xolte723f == \9\5\h\i\v\c\r\3\6\e\r\3\q\2\8\s\1\g\t\9\8\7\z\z\r\v\t\j\q\d\d\u\j\r\e\c\o\l\0\6\p\l\v\n\e\v\3\3\4\p\d\n\y\7\b\4\c\x\q\e\x\y\o\c\y\4\m\3\w\8\7\1\l\h\z\s\6\j\i\4\0\e\q\e\5\x\d\u\s\l\2\a\v\3\b\k\2\8\i\j\a\o\t\3\p\9\v\q\u\q\9\s\4\a\t\a\1\r\v\g\t\1\f\y\j\u\9\r\t\x\e\g\u\6\q\d\p\0\5\6\u\3\v\s\1\b\u\p\w\r\h\c\k\u\z\t\u\b\8\d\j\j\j\x\t\b\0\4\n\k\e\t\v\g\w\j\g\z\v\a\n\j\l\r\c\c\d\y\s\f\6\g\7\i\n\c\k\1\9\u\l\v\f\h\i\4\x\e\r\5\2\a\d\0\q\5\l\u\m\v\5\5\m\5\2\1\y\i\a\7\v\2\4\g\v\n\d\0\j\h\1\z\w\i\k\8\0\6\j\b\9\b\j\a\h\z\h\v\s\x\n\r\u\l\2\8\j\d\2\l\9\k\m\d\n\l\a\w\6\0\a\t\1\h\p\q\k\h\3\a\a\u\k\f\3\b\j\2\u\p\z\a\i\8\8\7\a\5\r\3\7\v\0\a\r\r\l\y\5\b\l\6\y\o\2\d\b\m\u\x\2\s\p\s\v\2\n\5\r\1\5\n\2\v\9\0\w\9\m\m\e\r\0\3\h\o\g\2\f\n\a\l\3\v\k\c\k\6\l\3\w\e\u\s\v\l\x\r\f\t\p\w\p\4\9\y\g\9\e\0\c\v\1\y\t\0\m\h\8\e\d\x\a\m\w\r\e\6\z\0\i\5\l\t\c\e\7\q\z\d\c\h\z\n\k\l\s\y\b\v\m\8\9\y\a\l\2\q\w\u\o\b\t\3\f\s\f\z\e\j\5\s\v\0\i\5\p\z\o\c\v\1\i\g\g\f\x\x\l\t\o\d\3\5\0\b\b\j\g\y\y\z\6\q\s\y\m\s\t\9\p\x\l\b\8\i\l\n\o\d\a\y\3\x\o\l\t\e\7\2\3\f ]] 00:06:15.065 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.065 09:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:06:15.065 [2024-11-05 09:29:00.938897] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:15.065 [2024-11-05 09:29:00.938984] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60462 ] 00:06:15.324 [2024-11-05 09:29:01.084586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.324 [2024-11-05 09:29:01.115565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.324 [2024-11-05 09:29:01.143205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.324  [2024-11-05T09:29:01.541Z] Copying: 512/512 [B] (average 125 kBps) 00:06:15.583 00:06:15.583 09:29:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 95hivcr36er3q28s1gt987zzrvtjqddujrecol06plvnev334pdny7b4cxqexyocy4m3w871lhzs6ji40eqe5xdusl2av3bk28ijaot3p9vquq9s4ata1rvgt1fyju9rtxegu6qdp056u3vs1bupwrhckuztub8djjjxtb04nketvgwjgzvanjlrccdysf6g7inck19ulvfhi4xer52ad0q5lumv55m521yia7v24gvnd0jh1zwik806jb9bjahzhvsxnrul28jd2l9kmdnlaw60at1hpqkh3aaukf3bj2upzai887a5r37v0arrly5bl6yo2dbmux2spsv2n5r15n2v90w9mmer03hog2fnal3vkck6l3weusvlxrftpwp49yg9e0cv1yt0mh8edxamwre6z0i5ltce7qzdchznklsybvm89yal2qwuobt3fsfzej5sv0i5pzocv1iggfxxltod350bbjgyyz6qsymst9pxlb8ilnoday3xolte723f == \9\5\h\i\v\c\r\3\6\e\r\3\q\2\8\s\1\g\t\9\8\7\z\z\r\v\t\j\q\d\d\u\j\r\e\c\o\l\0\6\p\l\v\n\e\v\3\3\4\p\d\n\y\7\b\4\c\x\q\e\x\y\o\c\y\4\m\3\w\8\7\1\l\h\z\s\6\j\i\4\0\e\q\e\5\x\d\u\s\l\2\a\v\3\b\k\2\8\i\j\a\o\t\3\p\9\v\q\u\q\9\s\4\a\t\a\1\r\v\g\t\1\f\y\j\u\9\r\t\x\e\g\u\6\q\d\p\0\5\6\u\3\v\s\1\b\u\p\w\r\h\c\k\u\z\t\u\b\8\d\j\j\j\x\t\b\0\4\n\k\e\t\v\g\w\j\g\z\v\a\n\j\l\r\c\c\d\y\s\f\6\g\7\i\n\c\k\1\9\u\l\v\f\h\i\4\x\e\r\5\2\a\d\0\q\5\l\u\m\v\5\5\m\5\2\1\y\i\a\7\v\2\4\g\v\n\d\0\j\h\1\z\w\i\k\8\0\6\j\b\9\b\j\a\h\z\h\v\s\x\n\r\u\l\2\8\j\d\2\l\9\k\m\d\n\l\a\w\6\0\a\t\1\h\p\q\k\h\3\a\a\u\k\f\3\b\j\2\u\p\z\a\i\8\8\7\a\5\r\3\7\v\0\a\r\r\l\y\5\b\l\6\y\o\2\d\b\m\u\x\2\s\p\s\v\2\n\5\r\1\5\n\2\v\9\0\w\9\m\m\e\r\0\3\h\o\g\2\f\n\a\l\3\v\k\c\k\6\l\3\w\e\u\s\v\l\x\r\f\t\p\w\p\4\9\y\g\9\e\0\c\v\1\y\t\0\m\h\8\e\d\x\a\m\w\r\e\6\z\0\i\5\l\t\c\e\7\q\z\d\c\h\z\n\k\l\s\y\b\v\m\8\9\y\a\l\2\q\w\u\o\b\t\3\f\s\f\z\e\j\5\s\v\0\i\5\p\z\o\c\v\1\i\g\g\f\x\x\l\t\o\d\3\5\0\b\b\j\g\y\y\z\6\q\s\y\m\s\t\9\p\x\l\b\8\i\l\n\o\d\a\y\3\x\o\l\t\e\7\2\3\f ]] 00:06:15.583 09:29:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:06:15.583 09:29:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:06:15.583 [2024-11-05 09:29:01.348442] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:15.583 [2024-11-05 09:29:01.348530] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60464 ] 00:06:15.583 [2024-11-05 09:29:01.495446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.583 [2024-11-05 09:29:01.530798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.842 [2024-11-05 09:29:01.562575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.842  [2024-11-05T09:29:01.800Z] Copying: 512/512 [B] (average 500 kBps) 00:06:15.842 00:06:15.842 ************************************ 00:06:15.842 END TEST dd_flags_misc_forced_aio 00:06:15.842 ************************************ 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ 95hivcr36er3q28s1gt987zzrvtjqddujrecol06plvnev334pdny7b4cxqexyocy4m3w871lhzs6ji40eqe5xdusl2av3bk28ijaot3p9vquq9s4ata1rvgt1fyju9rtxegu6qdp056u3vs1bupwrhckuztub8djjjxtb04nketvgwjgzvanjlrccdysf6g7inck19ulvfhi4xer52ad0q5lumv55m521yia7v24gvnd0jh1zwik806jb9bjahzhvsxnrul28jd2l9kmdnlaw60at1hpqkh3aaukf3bj2upzai887a5r37v0arrly5bl6yo2dbmux2spsv2n5r15n2v90w9mmer03hog2fnal3vkck6l3weusvlxrftpwp49yg9e0cv1yt0mh8edxamwre6z0i5ltce7qzdchznklsybvm89yal2qwuobt3fsfzej5sv0i5pzocv1iggfxxltod350bbjgyyz6qsymst9pxlb8ilnoday3xolte723f == \9\5\h\i\v\c\r\3\6\e\r\3\q\2\8\s\1\g\t\9\8\7\z\z\r\v\t\j\q\d\d\u\j\r\e\c\o\l\0\6\p\l\v\n\e\v\3\3\4\p\d\n\y\7\b\4\c\x\q\e\x\y\o\c\y\4\m\3\w\8\7\1\l\h\z\s\6\j\i\4\0\e\q\e\5\x\d\u\s\l\2\a\v\3\b\k\2\8\i\j\a\o\t\3\p\9\v\q\u\q\9\s\4\a\t\a\1\r\v\g\t\1\f\y\j\u\9\r\t\x\e\g\u\6\q\d\p\0\5\6\u\3\v\s\1\b\u\p\w\r\h\c\k\u\z\t\u\b\8\d\j\j\j\x\t\b\0\4\n\k\e\t\v\g\w\j\g\z\v\a\n\j\l\r\c\c\d\y\s\f\6\g\7\i\n\c\k\1\9\u\l\v\f\h\i\4\x\e\r\5\2\a\d\0\q\5\l\u\m\v\5\5\m\5\2\1\y\i\a\7\v\2\4\g\v\n\d\0\j\h\1\z\w\i\k\8\0\6\j\b\9\b\j\a\h\z\h\v\s\x\n\r\u\l\2\8\j\d\2\l\9\k\m\d\n\l\a\w\6\0\a\t\1\h\p\q\k\h\3\a\a\u\k\f\3\b\j\2\u\p\z\a\i\8\8\7\a\5\r\3\7\v\0\a\r\r\l\y\5\b\l\6\y\o\2\d\b\m\u\x\2\s\p\s\v\2\n\5\r\1\5\n\2\v\9\0\w\9\m\m\e\r\0\3\h\o\g\2\f\n\a\l\3\v\k\c\k\6\l\3\w\e\u\s\v\l\x\r\f\t\p\w\p\4\9\y\g\9\e\0\c\v\1\y\t\0\m\h\8\e\d\x\a\m\w\r\e\6\z\0\i\5\l\t\c\e\7\q\z\d\c\h\z\n\k\l\s\y\b\v\m\8\9\y\a\l\2\q\w\u\o\b\t\3\f\s\f\z\e\j\5\s\v\0\i\5\p\z\o\c\v\1\i\g\g\f\x\x\l\t\o\d\3\5\0\b\b\j\g\y\y\z\6\q\s\y\m\s\t\9\p\x\l\b\8\i\l\n\o\d\a\y\3\x\o\l\t\e\7\2\3\f ]] 00:06:15.843 00:06:15.843 real 0m3.189s 00:06:15.843 user 0m1.557s 00:06:15.843 sys 0m0.669s 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:06:15.843 ************************************ 00:06:15.843 END TEST spdk_dd_posix 00:06:15.843 ************************************ 00:06:15.843 00:06:15.843 real 0m15.398s 00:06:15.843 user 0m6.585s 00:06:15.843 sys 0m4.097s 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.843 09:29:01 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:06:16.102 09:29:01 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:16.102 09:29:01 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.102 09:29:01 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.102 09:29:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:16.102 ************************************ 00:06:16.102 START TEST spdk_dd_malloc 00:06:16.102 ************************************ 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:06:16.102 * Looking for test storage... 00:06:16.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lcov --version 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.102 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:16.103 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.103 --rc genhtml_branch_coverage=1 00:06:16.103 --rc genhtml_function_coverage=1 00:06:16.103 --rc genhtml_legend=1 00:06:16.103 --rc geninfo_all_blocks=1 00:06:16.103 --rc geninfo_unexecuted_blocks=1 00:06:16.103 00:06:16.103 ' 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:16.103 09:29:01 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:16.103 09:29:02 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:16.103 ************************************ 00:06:16.103 START TEST dd_malloc_copy 00:06:16.103 ************************************ 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1127 -- # malloc_copy 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:16.103 09:29:02 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:16.362 [2024-11-05 09:29:02.073333] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:16.362 [2024-11-05 09:29:02.073615] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60546 ] 00:06:16.362 { 00:06:16.362 "subsystems": [ 00:06:16.362 { 00:06:16.362 "subsystem": "bdev", 00:06:16.362 "config": [ 00:06:16.362 { 00:06:16.362 "params": { 00:06:16.362 "block_size": 512, 00:06:16.362 "num_blocks": 1048576, 00:06:16.362 "name": "malloc0" 00:06:16.362 }, 00:06:16.362 "method": "bdev_malloc_create" 00:06:16.362 }, 00:06:16.362 { 00:06:16.362 "params": { 00:06:16.362 "block_size": 512, 00:06:16.362 "num_blocks": 1048576, 00:06:16.362 "name": "malloc1" 00:06:16.362 }, 00:06:16.362 "method": "bdev_malloc_create" 00:06:16.362 }, 00:06:16.362 { 00:06:16.362 "method": "bdev_wait_for_examine" 00:06:16.362 } 00:06:16.362 ] 00:06:16.362 } 00:06:16.362 ] 00:06:16.362 } 00:06:16.362 [2024-11-05 09:29:02.217236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.362 [2024-11-05 09:29:02.247561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.362 [2024-11-05 09:29:02.274626] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:17.740  [2024-11-05T09:29:04.636Z] Copying: 239/512 [MB] (239 MBps) [2024-11-05T09:29:04.636Z] Copying: 477/512 [MB] (237 MBps) [2024-11-05T09:29:05.204Z] Copying: 512/512 [MB] (average 238 MBps) 00:06:19.246 00:06:19.246 09:29:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:06:19.246 09:29:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:06:19.246 09:29:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:19.246 09:29:04 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:19.246 [2024-11-05 09:29:04.960177] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
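For dd_malloc_copy, gen_conf pipes the JSON shown above into spdk_dd over /dev/fd/62; it defines two RAM-backed bdevs of 1048576 blocks x 512 bytes (512 MiB each), and the test copies one into the other in both directions. A standalone equivalent, assuming the same repo layout and writing the config to a file instead of a pipe:

    cat > malloc.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" } ] } ] }
    EOF
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json malloc.json

At the ~234-238 MBps averages reported, each 512 MiB direction takes about 2.2 s, which lines up roughly with the ~5.8 s real time the test reports once the two DPDK initializations are added.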
00:06:19.246 [2024-11-05 09:29:04.960825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60588 ] 00:06:19.246 { 00:06:19.246 "subsystems": [ 00:06:19.246 { 00:06:19.246 "subsystem": "bdev", 00:06:19.246 "config": [ 00:06:19.246 { 00:06:19.246 "params": { 00:06:19.246 "block_size": 512, 00:06:19.246 "num_blocks": 1048576, 00:06:19.246 "name": "malloc0" 00:06:19.246 }, 00:06:19.246 "method": "bdev_malloc_create" 00:06:19.246 }, 00:06:19.246 { 00:06:19.246 "params": { 00:06:19.246 "block_size": 512, 00:06:19.246 "num_blocks": 1048576, 00:06:19.246 "name": "malloc1" 00:06:19.246 }, 00:06:19.246 "method": "bdev_malloc_create" 00:06:19.246 }, 00:06:19.246 { 00:06:19.246 "method": "bdev_wait_for_examine" 00:06:19.246 } 00:06:19.246 ] 00:06:19.246 } 00:06:19.246 ] 00:06:19.246 } 00:06:19.246 [2024-11-05 09:29:05.108172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.246 [2024-11-05 09:29:05.138467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.246 [2024-11-05 09:29:05.166973] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:20.625  [2024-11-05T09:29:07.537Z] Copying: 236/512 [MB] (236 MBps) [2024-11-05T09:29:07.806Z] Copying: 468/512 [MB] (232 MBps) [2024-11-05T09:29:08.065Z] Copying: 512/512 [MB] (average 234 MBps) 00:06:22.107 00:06:22.107 ************************************ 00:06:22.107 END TEST dd_malloc_copy 00:06:22.107 ************************************ 00:06:22.107 00:06:22.107 real 0m5.841s 00:06:22.107 user 0m5.237s 00:06:22.107 sys 0m0.454s 00:06:22.107 09:29:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.107 09:29:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.107 ************************************ 00:06:22.107 END TEST spdk_dd_malloc 00:06:22.107 ************************************ 00:06:22.107 00:06:22.107 real 0m6.083s 00:06:22.107 user 0m5.374s 00:06:22.107 sys 0m0.558s 00:06:22.107 09:29:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.107 09:29:07 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:06:22.107 09:29:07 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:22.107 09:29:07 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:06:22.107 09:29:07 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.107 09:29:07 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:22.107 ************************************ 00:06:22.107 START TEST spdk_dd_bdev_to_bdev 00:06:22.107 ************************************ 00:06:22.107 09:29:07 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:06:22.107 * Looking for test storage... 
00:06:22.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:22.107 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.107 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.107 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.367 --rc genhtml_branch_coverage=1 00:06:22.367 --rc genhtml_function_coverage=1 00:06:22.367 --rc genhtml_legend=1 00:06:22.367 --rc geninfo_all_blocks=1 00:06:22.367 --rc geninfo_unexecuted_blocks=1 00:06:22.367 00:06:22.367 ' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:22.367 09:29:08 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:22.367 ************************************ 00:06:22.367 START TEST dd_inflate_file 00:06:22.367 ************************************ 00:06:22.367 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:06:22.367 [2024-11-05 09:29:08.191997] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
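dd_inflate_file, traced below, pads dd.dump0 with 64 MiB of zeros; --oflag=append keeps the existing content at offset 0. The magic line echoed above is presumably redirected into the same dump file (xtrace does not print redirections), since the size check that follows accounts for it. Assuming the same paths, the step amounts to:

    echo 'This Is Our Magic, find it' > dd.dump0
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero --of=dd.dump0 --oflag=append --bs=1048576 --count=64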
00:06:22.367 [2024-11-05 09:29:08.192238] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60697 ] 00:06:22.627 [2024-11-05 09:29:08.328133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.627 [2024-11-05 09:29:08.355983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.627 [2024-11-05 09:29:08.382383] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.627  [2024-11-05T09:29:08.585Z] Copying: 64/64 [MB] (average 1641 MBps) 00:06:22.627 00:06:22.627 00:06:22.627 real 0m0.403s 00:06:22.627 user 0m0.216s 00:06:22.627 sys 0m0.203s 00:06:22.627 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.627 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:06:22.627 ************************************ 00:06:22.627 END TEST dd_inflate_file 00:06:22.627 ************************************ 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:22.886 ************************************ 00:06:22.886 START TEST dd_copy_to_out_bdev 00:06:22.886 ************************************ 00:06:22.886 09:29:08 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:06:22.886 { 00:06:22.886 "subsystems": [ 00:06:22.886 { 00:06:22.886 "subsystem": "bdev", 00:06:22.886 "config": [ 00:06:22.886 { 00:06:22.886 "params": { 00:06:22.886 "trtype": "pcie", 00:06:22.886 "traddr": "0000:00:10.0", 00:06:22.886 "name": "Nvme0" 00:06:22.886 }, 00:06:22.886 "method": "bdev_nvme_attach_controller" 00:06:22.886 }, 00:06:22.886 { 00:06:22.886 "params": { 00:06:22.886 "trtype": "pcie", 00:06:22.886 "traddr": "0000:00:11.0", 00:06:22.886 "name": "Nvme1" 00:06:22.886 }, 00:06:22.886 "method": "bdev_nvme_attach_controller" 00:06:22.886 }, 00:06:22.886 { 00:06:22.886 "method": "bdev_wait_for_examine" 00:06:22.886 } 00:06:22.886 ] 00:06:22.886 } 00:06:22.886 ] 00:06:22.886 } 00:06:22.886 [2024-11-05 09:29:08.664106] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
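The 67108891-byte file size reported by wc -c above checks out: 64 appended blocks of 1048576 zero bytes, plus the 26-character magic string and its trailing newline.

    $ echo $(( 64 * 1048576 + 26 + 1 ))
    67108891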
00:06:22.886 [2024-11-05 09:29:08.664342] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60725 ] 00:06:22.886 [2024-11-05 09:29:08.809304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.886 [2024-11-05 09:29:08.843306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.145 [2024-11-05 09:29:08.873077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:24.080  [2024-11-05T09:29:10.297Z] Copying: 51/64 [MB] (51 MBps) [2024-11-05T09:29:10.557Z] Copying: 64/64 [MB] (average 51 MBps) 00:06:24.599 00:06:24.599 00:06:24.599 real 0m1.801s 00:06:24.599 user 0m1.629s 00:06:24.599 sys 0m1.445s 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.599 ************************************ 00:06:24.599 END TEST dd_copy_to_out_bdev 00:06:24.599 ************************************ 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:24.599 ************************************ 00:06:24.599 START TEST dd_offset_magic 00:06:24.599 ************************************ 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1127 -- # offset_magic 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:24.599 09:29:10 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:24.599 [2024-11-05 09:29:10.505014] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:24.599 [2024-11-05 09:29:10.505302] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60770 ] 00:06:24.599 { 00:06:24.599 "subsystems": [ 00:06:24.599 { 00:06:24.599 "subsystem": "bdev", 00:06:24.599 "config": [ 00:06:24.599 { 00:06:24.599 "params": { 00:06:24.599 "trtype": "pcie", 00:06:24.599 "traddr": "0000:00:10.0", 00:06:24.599 "name": "Nvme0" 00:06:24.599 }, 00:06:24.599 "method": "bdev_nvme_attach_controller" 00:06:24.599 }, 00:06:24.599 { 00:06:24.599 "params": { 00:06:24.599 "trtype": "pcie", 00:06:24.599 "traddr": "0000:00:11.0", 00:06:24.599 "name": "Nvme1" 00:06:24.599 }, 00:06:24.599 "method": "bdev_nvme_attach_controller" 00:06:24.599 }, 00:06:24.599 { 00:06:24.599 "method": "bdev_wait_for_examine" 00:06:24.599 } 00:06:24.599 ] 00:06:24.599 } 00:06:24.599 ] 00:06:24.599 } 00:06:24.858 [2024-11-05 09:29:10.641426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.858 [2024-11-05 09:29:10.669833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.858 [2024-11-05 09:29:10.699102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.118  [2024-11-05T09:29:11.076Z] Copying: 65/65 [MB] (average 984 MBps) 00:06:25.118 00:06:25.118 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:06:25.118 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:25.118 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:25.118 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:25.376 [2024-11-05 09:29:11.125676] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:25.376 [2024-11-05 09:29:11.125946] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60785 ] 00:06:25.376 { 00:06:25.376 "subsystems": [ 00:06:25.376 { 00:06:25.376 "subsystem": "bdev", 00:06:25.376 "config": [ 00:06:25.376 { 00:06:25.376 "params": { 00:06:25.376 "trtype": "pcie", 00:06:25.376 "traddr": "0000:00:10.0", 00:06:25.376 "name": "Nvme0" 00:06:25.376 }, 00:06:25.376 "method": "bdev_nvme_attach_controller" 00:06:25.376 }, 00:06:25.376 { 00:06:25.376 "params": { 00:06:25.376 "trtype": "pcie", 00:06:25.376 "traddr": "0000:00:11.0", 00:06:25.376 "name": "Nvme1" 00:06:25.376 }, 00:06:25.376 "method": "bdev_nvme_attach_controller" 00:06:25.376 }, 00:06:25.376 { 00:06:25.376 "method": "bdev_wait_for_examine" 00:06:25.376 } 00:06:25.376 ] 00:06:25.376 } 00:06:25.376 ] 00:06:25.376 } 00:06:25.376 [2024-11-05 09:29:11.270246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.376 [2024-11-05 09:29:11.296878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.376 [2024-11-05 09:29:11.326380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.634  [2024-11-05T09:29:11.592Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:25.634 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:25.892 09:29:11 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:25.892 [2024-11-05 09:29:11.656759] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
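The @35/@36 magic check a few lines up is plain bash: the first 26 bytes of the read-back dump are pulled into a variable and matched against the expected string. A sketch under the paths shown in the trace (the harness's exact redirection may differ slightly):

    read -rn26 magic_check < /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
    [[ $magic_check == 'This Is Our Magic, find it' ]]    # the magic is exactly 26 bytes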
00:06:25.892 [2024-11-05 09:29:11.657050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 00:06:25.892 { 00:06:25.892 "subsystems": [ 00:06:25.892 { 00:06:25.892 "subsystem": "bdev", 00:06:25.892 "config": [ 00:06:25.892 { 00:06:25.892 "params": { 00:06:25.892 "trtype": "pcie", 00:06:25.892 "traddr": "0000:00:10.0", 00:06:25.892 "name": "Nvme0" 00:06:25.892 }, 00:06:25.892 "method": "bdev_nvme_attach_controller" 00:06:25.892 }, 00:06:25.892 { 00:06:25.892 "params": { 00:06:25.892 "trtype": "pcie", 00:06:25.892 "traddr": "0000:00:11.0", 00:06:25.892 "name": "Nvme1" 00:06:25.892 }, 00:06:25.892 "method": "bdev_nvme_attach_controller" 00:06:25.892 }, 00:06:25.892 { 00:06:25.892 "method": "bdev_wait_for_examine" 00:06:25.892 } 00:06:25.892 ] 00:06:25.892 } 00:06:25.892 ] 00:06:25.892 } 00:06:25.892 [2024-11-05 09:29:11.800200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.892 [2024-11-05 09:29:11.829015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.151 [2024-11-05 09:29:11.857664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.151  [2024-11-05T09:29:12.368Z] Copying: 65/65 [MB] (average 1000 MBps) 00:06:26.410 00:06:26.410 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:06:26.410 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:06:26.410 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:06:26.410 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:26.410 [2024-11-05 09:29:12.294085] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:26.410 [2024-11-05 09:29:12.294170] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60821 ] 00:06:26.410 { 00:06:26.410 "subsystems": [ 00:06:26.410 { 00:06:26.410 "subsystem": "bdev", 00:06:26.410 "config": [ 00:06:26.410 { 00:06:26.410 "params": { 00:06:26.410 "trtype": "pcie", 00:06:26.410 "traddr": "0000:00:10.0", 00:06:26.410 "name": "Nvme0" 00:06:26.410 }, 00:06:26.410 "method": "bdev_nvme_attach_controller" 00:06:26.410 }, 00:06:26.410 { 00:06:26.410 "params": { 00:06:26.410 "trtype": "pcie", 00:06:26.410 "traddr": "0000:00:11.0", 00:06:26.410 "name": "Nvme1" 00:06:26.410 }, 00:06:26.410 "method": "bdev_nvme_attach_controller" 00:06:26.410 }, 00:06:26.410 { 00:06:26.410 "method": "bdev_wait_for_examine" 00:06:26.410 } 00:06:26.410 ] 00:06:26.410 } 00:06:26.410 ] 00:06:26.410 } 00:06:26.669 [2024-11-05 09:29:12.433786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.669 [2024-11-05 09:29:12.463278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.669 [2024-11-05 09:29:12.497062] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:26.928  [2024-11-05T09:29:12.886Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:06:26.928 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:06:26.928 00:06:26.928 real 0m2.318s 00:06:26.928 user 0m1.761s 00:06:26.928 sys 0m0.564s 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.928 ************************************ 00:06:26.928 END TEST dd_offset_magic 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:06:26.928 ************************************ 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:26.928 09:29:12 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:26.928 [2024-11-05 09:29:12.873314] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
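A note on the sizes in the clear_nvme cleanup above: the region to wipe is 4194330 bytes, i.e. 4 MiB plus the 26-byte magic, and with bs=1048576 that rounds up to count=5, so the first five 1 MiB blocks of the bdev are rewritten from /dev/zero. Stand-alone, with the same two-controller config as in the earlier sketch:

    # ceil(4194330 / 1048576) = 5 blocks of 1 MiB
    spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 \
        --json <(printf '%s' "$conf")    # $conf as in the earlier sketch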
00:06:26.928 [2024-11-05 09:29:12.873417] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60853 ] 00:06:26.928 { 00:06:26.928 "subsystems": [ 00:06:26.928 { 00:06:26.928 "subsystem": "bdev", 00:06:26.928 "config": [ 00:06:26.928 { 00:06:26.928 "params": { 00:06:26.928 "trtype": "pcie", 00:06:26.928 "traddr": "0000:00:10.0", 00:06:26.928 "name": "Nvme0" 00:06:26.928 }, 00:06:26.928 "method": "bdev_nvme_attach_controller" 00:06:26.928 }, 00:06:26.928 { 00:06:26.928 "params": { 00:06:26.928 "trtype": "pcie", 00:06:26.928 "traddr": "0000:00:11.0", 00:06:26.928 "name": "Nvme1" 00:06:26.928 }, 00:06:26.928 "method": "bdev_nvme_attach_controller" 00:06:26.928 }, 00:06:26.928 { 00:06:26.928 "method": "bdev_wait_for_examine" 00:06:26.928 } 00:06:26.928 ] 00:06:26.928 } 00:06:26.928 ] 00:06:26.928 } 00:06:27.187 [2024-11-05 09:29:13.020514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.187 [2024-11-05 09:29:13.048972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.187 [2024-11-05 09:29:13.077778] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.446  [2024-11-05T09:29:13.404Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:06:27.446 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:27.446 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:27.446 [2024-11-05 09:29:13.406011] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:27.446 [2024-11-05 09:29:13.406114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60868 ] 00:06:27.705 { 00:06:27.705 "subsystems": [ 00:06:27.705 { 00:06:27.705 "subsystem": "bdev", 00:06:27.705 "config": [ 00:06:27.705 { 00:06:27.705 "params": { 00:06:27.705 "trtype": "pcie", 00:06:27.705 "traddr": "0000:00:10.0", 00:06:27.705 "name": "Nvme0" 00:06:27.705 }, 00:06:27.705 "method": "bdev_nvme_attach_controller" 00:06:27.705 }, 00:06:27.705 { 00:06:27.705 "params": { 00:06:27.705 "trtype": "pcie", 00:06:27.705 "traddr": "0000:00:11.0", 00:06:27.705 "name": "Nvme1" 00:06:27.705 }, 00:06:27.705 "method": "bdev_nvme_attach_controller" 00:06:27.705 }, 00:06:27.705 { 00:06:27.705 "method": "bdev_wait_for_examine" 00:06:27.705 } 00:06:27.705 ] 00:06:27.705 } 00:06:27.705 ] 00:06:27.705 } 00:06:27.705 [2024-11-05 09:29:13.543885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.705 [2024-11-05 09:29:13.570713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.705 [2024-11-05 09:29:13.597384] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:27.962  [2024-11-05T09:29:13.920Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:06:27.962 00:06:27.962 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:06:27.962 00:06:27.962 real 0m5.950s 00:06:27.963 user 0m4.555s 00:06:27.963 sys 0m2.728s 00:06:27.963 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.963 09:29:13 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:27.963 ************************************ 00:06:27.963 END TEST spdk_dd_bdev_to_bdev 00:06:27.963 ************************************ 00:06:28.222 09:29:13 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:06:28.222 09:29:13 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:28.222 09:29:13 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.222 09:29:13 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.222 09:29:13 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:28.222 ************************************ 00:06:28.222 START TEST spdk_dd_uring 00:06:28.222 ************************************ 00:06:28.222 09:29:13 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:06:28.222 * Looking for test storage... 
00:06:28.222 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lcov --version 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:28.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.222 --rc genhtml_branch_coverage=1 00:06:28.222 --rc genhtml_function_coverage=1 00:06:28.222 --rc genhtml_legend=1 00:06:28.222 --rc geninfo_all_blocks=1 00:06:28.222 --rc geninfo_unexecuted_blocks=1 00:06:28.222 00:06:28.222 ' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:28.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.222 --rc genhtml_branch_coverage=1 00:06:28.222 --rc genhtml_function_coverage=1 00:06:28.222 --rc genhtml_legend=1 00:06:28.222 --rc geninfo_all_blocks=1 00:06:28.222 --rc geninfo_unexecuted_blocks=1 00:06:28.222 00:06:28.222 ' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:28.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.222 --rc genhtml_branch_coverage=1 00:06:28.222 --rc genhtml_function_coverage=1 00:06:28.222 --rc genhtml_legend=1 00:06:28.222 --rc geninfo_all_blocks=1 00:06:28.222 --rc geninfo_unexecuted_blocks=1 00:06:28.222 00:06:28.222 ' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:28.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.222 --rc genhtml_branch_coverage=1 00:06:28.222 --rc genhtml_function_coverage=1 00:06:28.222 --rc genhtml_legend=1 00:06:28.222 --rc geninfo_all_blocks=1 00:06:28.222 --rc geninfo_unexecuted_blocks=1 00:06:28.222 00:06:28.222 ' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:28.222 ************************************ 00:06:28.222 START TEST dd_uring_copy 00:06:28.222 ************************************ 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1127 -- # uring_zram_copy 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:28.222 
09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=pa1w9q2vn9gtzw9zi2n9gk180p6tkw05fo99b74fighru6d11svntkwaw3fsc1if4t7vy94b98sjfarl6jo4myrbkqyplsb3fd2a2bjvdomcx5eq1haaqnvu2pus60k5bbn03irq924vg50gig742hxks81a8weq6zp11ibwl84gxs2lgai2j7rpadug8cnr9wn2y78sppuix0vx1s71cisx2th73ilc7hd7q19cazozlw1xs9j4yieh20ylc7fcwj09ytpuikuvmnsa2e7w8c539vl80mq2lbm2anqd70oq7b61lq123u6gl6t2wk92v3226wl8kgu3fcdsohbhgfs9ekry7ijb4bmr9hlmgtlb4h2eg8jtnfmydh0bsnxtojl268vlosp3ni4cy6sjj704z7naw5qz20bp56i0corc4skn7vbyh2pyqvy0j86t03cpsxtz0v08sldu8ih95jpf0g4orhaqhw1nbu491oatrmzql7neu6rfom1haw3jg4jyhduab1ntgxugak3c5ri7uepbnt8jplwr9qdmj0k7cye5w4o7fxni6b9yjk7xvpof9ha37rtfmifvovfwwlbe9atfnom1fzbzwnn1segy85g9v89quifmzuz7n1ui4z7p6cs86kiaddb917c2ykq8vbd4xv55nhfq3jxz3abk7du409mad5kxv90k6t4496qfnjzkw3lbmjhcp5uhklozj15u2alukybvgapa10ewixrxxv2ddpcvdpljg55inl0umqcrv8o12obp29qojlq2bhbxknbi7ify5hpuswvb35mkd01uitp7ah841dcu75qxrz3u7w0x3gh6p9ngayl6327m8kjzbitrf2tv2abbyj982l3435fj6exexl6589nrz94j0o3yfdwx6pv2mecizr2wk7mxkqbqcysk66ftqzossc2u3nprb4wl7pnxll60exgcyddu15nw6chpe9obqzrfatkj9fmjsfwkhclgout0vde6jaxw4q0wcn2s 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
pa1w9q2vn9gtzw9zi2n9gk180p6tkw05fo99b74fighru6d11svntkwaw3fsc1if4t7vy94b98sjfarl6jo4myrbkqyplsb3fd2a2bjvdomcx5eq1haaqnvu2pus60k5bbn03irq924vg50gig742hxks81a8weq6zp11ibwl84gxs2lgai2j7rpadug8cnr9wn2y78sppuix0vx1s71cisx2th73ilc7hd7q19cazozlw1xs9j4yieh20ylc7fcwj09ytpuikuvmnsa2e7w8c539vl80mq2lbm2anqd70oq7b61lq123u6gl6t2wk92v3226wl8kgu3fcdsohbhgfs9ekry7ijb4bmr9hlmgtlb4h2eg8jtnfmydh0bsnxtojl268vlosp3ni4cy6sjj704z7naw5qz20bp56i0corc4skn7vbyh2pyqvy0j86t03cpsxtz0v08sldu8ih95jpf0g4orhaqhw1nbu491oatrmzql7neu6rfom1haw3jg4jyhduab1ntgxugak3c5ri7uepbnt8jplwr9qdmj0k7cye5w4o7fxni6b9yjk7xvpof9ha37rtfmifvovfwwlbe9atfnom1fzbzwnn1segy85g9v89quifmzuz7n1ui4z7p6cs86kiaddb917c2ykq8vbd4xv55nhfq3jxz3abk7du409mad5kxv90k6t4496qfnjzkw3lbmjhcp5uhklozj15u2alukybvgapa10ewixrxxv2ddpcvdpljg55inl0umqcrv8o12obp29qojlq2bhbxknbi7ify5hpuswvb35mkd01uitp7ah841dcu75qxrz3u7w0x3gh6p9ngayl6327m8kjzbitrf2tv2abbyj982l3435fj6exexl6589nrz94j0o3yfdwx6pv2mecizr2wk7mxkqbqcysk66ftqzossc2u3nprb4wl7pnxll60exgcyddu15nw6chpe9obqzrfatkj9fmjsfwkhclgout0vde6jaxw4q0wcn2s 00:06:28.222 09:29:14 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:06:28.480 [2024-11-05 09:29:14.197358] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:28.481 [2024-11-05 09:29:14.197444] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:06:28.481 [2024-11-05 09:29:14.338137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.481 [2024-11-05 09:29:14.365461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.481 [2024-11-05 09:29:14.395398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.046  [2024-11-05T09:29:15.263Z] Copying: 511/511 [MB] (average 1471 MBps) 00:06:29.305 00:06:29.305 09:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:06:29.305 09:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:06:29.305 09:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:29.305 09:29:15 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:29.305 [2024-11-05 09:29:15.136429] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
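Two details worth calling out in the uring setup above. First, the zram backing device comes from sysfs: reading hot_add allocates the next free device id (1 in this run) and writing disksize sizes it. Roughly, assuming root and the zram module loaded:

    id=$(cat /sys/class/zram-control/hot_add)    # creates /dev/zram$id
    echo 512M > /sys/block/zram$id/disksize

Second, the seed-file arithmetic: magic.dump0 is built by echoing the 1024-character magic (1025 bytes with the trailing newline) and then appending 536869887 bytes of zeros, 536870912 bytes in total, exactly the 512 MiB of the device. The magic therefore sits at the head of the file, which is why the later checks only need its first 1024 bytes, and why the plain-file write reports 511/511 [MB].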
00:06:29.305 [2024-11-05 09:29:15.136531] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60959 ] 00:06:29.305 { 00:06:29.305 "subsystems": [ 00:06:29.305 { 00:06:29.305 "subsystem": "bdev", 00:06:29.305 "config": [ 00:06:29.305 { 00:06:29.305 "params": { 00:06:29.305 "block_size": 512, 00:06:29.305 "num_blocks": 1048576, 00:06:29.305 "name": "malloc0" 00:06:29.305 }, 00:06:29.305 "method": "bdev_malloc_create" 00:06:29.305 }, 00:06:29.305 { 00:06:29.305 "params": { 00:06:29.305 "filename": "/dev/zram1", 00:06:29.305 "name": "uring0" 00:06:29.305 }, 00:06:29.305 "method": "bdev_uring_create" 00:06:29.305 }, 00:06:29.305 { 00:06:29.305 "method": "bdev_wait_for_examine" 00:06:29.305 } 00:06:29.305 ] 00:06:29.305 } 00:06:29.305 ] 00:06:29.305 } 00:06:29.564 [2024-11-05 09:29:15.277076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.564 [2024-11-05 09:29:15.303694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.564 [2024-11-05 09:29:15.331464] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.941  [2024-11-05T09:29:17.466Z] Copying: 233/512 [MB] (233 MBps) [2024-11-05T09:29:17.724Z] Copying: 455/512 [MB] (222 MBps) [2024-11-05T09:29:17.983Z] Copying: 512/512 [MB] (average 229 MBps) 00:06:32.025 00:06:32.025 09:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:06:32.025 09:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:06:32.025 09:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:32.025 09:29:17 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:32.025 [2024-11-05 09:29:17.964056] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
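The read-back starting here dumps uring0 out to magic.dump1, and verification is then two-layered: the 1024-byte magic at the head of each dump is re-read and string-compared (the @65-@69 checks below), and the files are compared byte-for-byte with diff -q (@71). A sketch of the head check, assuming $magic still holds the generated string:

    read -rn1024 verify_magic < /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1
    [[ $verify_magic == "$magic" ]]
    diff -q magic.dump0 magic.dump1    # whole-file comparison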
00:06:32.025 [2024-11-05 09:29:17.964156] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61003 ] 00:06:32.025 { 00:06:32.025 "subsystems": [ 00:06:32.025 { 00:06:32.025 "subsystem": "bdev", 00:06:32.025 "config": [ 00:06:32.025 { 00:06:32.025 "params": { 00:06:32.025 "block_size": 512, 00:06:32.025 "num_blocks": 1048576, 00:06:32.025 "name": "malloc0" 00:06:32.025 }, 00:06:32.025 "method": "bdev_malloc_create" 00:06:32.025 }, 00:06:32.025 { 00:06:32.025 "params": { 00:06:32.025 "filename": "/dev/zram1", 00:06:32.025 "name": "uring0" 00:06:32.025 }, 00:06:32.025 "method": "bdev_uring_create" 00:06:32.025 }, 00:06:32.025 { 00:06:32.025 "method": "bdev_wait_for_examine" 00:06:32.025 } 00:06:32.025 ] 00:06:32.025 } 00:06:32.025 ] 00:06:32.025 } 00:06:32.285 [2024-11-05 09:29:18.106619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.285 [2024-11-05 09:29:18.136745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.285 [2024-11-05 09:29:18.165772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:33.662  [2024-11-05T09:29:20.561Z] Copying: 175/512 [MB] (175 MBps) [2024-11-05T09:29:21.502Z] Copying: 344/512 [MB] (169 MBps) [2024-11-05T09:29:21.502Z] Copying: 507/512 [MB] (162 MBps) [2024-11-05T09:29:21.762Z] Copying: 512/512 [MB] (average 169 MBps) 00:06:35.804 00:06:35.804 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:06:35.804 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ pa1w9q2vn9gtzw9zi2n9gk180p6tkw05fo99b74fighru6d11svntkwaw3fsc1if4t7vy94b98sjfarl6jo4myrbkqyplsb3fd2a2bjvdomcx5eq1haaqnvu2pus60k5bbn03irq924vg50gig742hxks81a8weq6zp11ibwl84gxs2lgai2j7rpadug8cnr9wn2y78sppuix0vx1s71cisx2th73ilc7hd7q19cazozlw1xs9j4yieh20ylc7fcwj09ytpuikuvmnsa2e7w8c539vl80mq2lbm2anqd70oq7b61lq123u6gl6t2wk92v3226wl8kgu3fcdsohbhgfs9ekry7ijb4bmr9hlmgtlb4h2eg8jtnfmydh0bsnxtojl268vlosp3ni4cy6sjj704z7naw5qz20bp56i0corc4skn7vbyh2pyqvy0j86t03cpsxtz0v08sldu8ih95jpf0g4orhaqhw1nbu491oatrmzql7neu6rfom1haw3jg4jyhduab1ntgxugak3c5ri7uepbnt8jplwr9qdmj0k7cye5w4o7fxni6b9yjk7xvpof9ha37rtfmifvovfwwlbe9atfnom1fzbzwnn1segy85g9v89quifmzuz7n1ui4z7p6cs86kiaddb917c2ykq8vbd4xv55nhfq3jxz3abk7du409mad5kxv90k6t4496qfnjzkw3lbmjhcp5uhklozj15u2alukybvgapa10ewixrxxv2ddpcvdpljg55inl0umqcrv8o12obp29qojlq2bhbxknbi7ify5hpuswvb35mkd01uitp7ah841dcu75qxrz3u7w0x3gh6p9ngayl6327m8kjzbitrf2tv2abbyj982l3435fj6exexl6589nrz94j0o3yfdwx6pv2mecizr2wk7mxkqbqcysk66ftqzossc2u3nprb4wl7pnxll60exgcyddu15nw6chpe9obqzrfatkj9fmjsfwkhclgout0vde6jaxw4q0wcn2s == 
\p\a\1\w\9\q\2\v\n\9\g\t\z\w\9\z\i\2\n\9\g\k\1\8\0\p\6\t\k\w\0\5\f\o\9\9\b\7\4\f\i\g\h\r\u\6\d\1\1\s\v\n\t\k\w\a\w\3\f\s\c\1\i\f\4\t\7\v\y\9\4\b\9\8\s\j\f\a\r\l\6\j\o\4\m\y\r\b\k\q\y\p\l\s\b\3\f\d\2\a\2\b\j\v\d\o\m\c\x\5\e\q\1\h\a\a\q\n\v\u\2\p\u\s\6\0\k\5\b\b\n\0\3\i\r\q\9\2\4\v\g\5\0\g\i\g\7\4\2\h\x\k\s\8\1\a\8\w\e\q\6\z\p\1\1\i\b\w\l\8\4\g\x\s\2\l\g\a\i\2\j\7\r\p\a\d\u\g\8\c\n\r\9\w\n\2\y\7\8\s\p\p\u\i\x\0\v\x\1\s\7\1\c\i\s\x\2\t\h\7\3\i\l\c\7\h\d\7\q\1\9\c\a\z\o\z\l\w\1\x\s\9\j\4\y\i\e\h\2\0\y\l\c\7\f\c\w\j\0\9\y\t\p\u\i\k\u\v\m\n\s\a\2\e\7\w\8\c\5\3\9\v\l\8\0\m\q\2\l\b\m\2\a\n\q\d\7\0\o\q\7\b\6\1\l\q\1\2\3\u\6\g\l\6\t\2\w\k\9\2\v\3\2\2\6\w\l\8\k\g\u\3\f\c\d\s\o\h\b\h\g\f\s\9\e\k\r\y\7\i\j\b\4\b\m\r\9\h\l\m\g\t\l\b\4\h\2\e\g\8\j\t\n\f\m\y\d\h\0\b\s\n\x\t\o\j\l\2\6\8\v\l\o\s\p\3\n\i\4\c\y\6\s\j\j\7\0\4\z\7\n\a\w\5\q\z\2\0\b\p\5\6\i\0\c\o\r\c\4\s\k\n\7\v\b\y\h\2\p\y\q\v\y\0\j\8\6\t\0\3\c\p\s\x\t\z\0\v\0\8\s\l\d\u\8\i\h\9\5\j\p\f\0\g\4\o\r\h\a\q\h\w\1\n\b\u\4\9\1\o\a\t\r\m\z\q\l\7\n\e\u\6\r\f\o\m\1\h\a\w\3\j\g\4\j\y\h\d\u\a\b\1\n\t\g\x\u\g\a\k\3\c\5\r\i\7\u\e\p\b\n\t\8\j\p\l\w\r\9\q\d\m\j\0\k\7\c\y\e\5\w\4\o\7\f\x\n\i\6\b\9\y\j\k\7\x\v\p\o\f\9\h\a\3\7\r\t\f\m\i\f\v\o\v\f\w\w\l\b\e\9\a\t\f\n\o\m\1\f\z\b\z\w\n\n\1\s\e\g\y\8\5\g\9\v\8\9\q\u\i\f\m\z\u\z\7\n\1\u\i\4\z\7\p\6\c\s\8\6\k\i\a\d\d\b\9\1\7\c\2\y\k\q\8\v\b\d\4\x\v\5\5\n\h\f\q\3\j\x\z\3\a\b\k\7\d\u\4\0\9\m\a\d\5\k\x\v\9\0\k\6\t\4\4\9\6\q\f\n\j\z\k\w\3\l\b\m\j\h\c\p\5\u\h\k\l\o\z\j\1\5\u\2\a\l\u\k\y\b\v\g\a\p\a\1\0\e\w\i\x\r\x\x\v\2\d\d\p\c\v\d\p\l\j\g\5\5\i\n\l\0\u\m\q\c\r\v\8\o\1\2\o\b\p\2\9\q\o\j\l\q\2\b\h\b\x\k\n\b\i\7\i\f\y\5\h\p\u\s\w\v\b\3\5\m\k\d\0\1\u\i\t\p\7\a\h\8\4\1\d\c\u\7\5\q\x\r\z\3\u\7\w\0\x\3\g\h\6\p\9\n\g\a\y\l\6\3\2\7\m\8\k\j\z\b\i\t\r\f\2\t\v\2\a\b\b\y\j\9\8\2\l\3\4\3\5\f\j\6\e\x\e\x\l\6\5\8\9\n\r\z\9\4\j\0\o\3\y\f\d\w\x\6\p\v\2\m\e\c\i\z\r\2\w\k\7\m\x\k\q\b\q\c\y\s\k\6\6\f\t\q\z\o\s\s\c\2\u\3\n\p\r\b\4\w\l\7\p\n\x\l\l\6\0\e\x\g\c\y\d\d\u\1\5\n\w\6\c\h\p\e\9\o\b\q\z\r\f\a\t\k\j\9\f\m\j\s\f\w\k\h\c\l\g\o\u\t\0\v\d\e\6\j\a\x\w\4\q\0\w\c\n\2\s ]] 00:06:35.804 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:06:35.804 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ pa1w9q2vn9gtzw9zi2n9gk180p6tkw05fo99b74fighru6d11svntkwaw3fsc1if4t7vy94b98sjfarl6jo4myrbkqyplsb3fd2a2bjvdomcx5eq1haaqnvu2pus60k5bbn03irq924vg50gig742hxks81a8weq6zp11ibwl84gxs2lgai2j7rpadug8cnr9wn2y78sppuix0vx1s71cisx2th73ilc7hd7q19cazozlw1xs9j4yieh20ylc7fcwj09ytpuikuvmnsa2e7w8c539vl80mq2lbm2anqd70oq7b61lq123u6gl6t2wk92v3226wl8kgu3fcdsohbhgfs9ekry7ijb4bmr9hlmgtlb4h2eg8jtnfmydh0bsnxtojl268vlosp3ni4cy6sjj704z7naw5qz20bp56i0corc4skn7vbyh2pyqvy0j86t03cpsxtz0v08sldu8ih95jpf0g4orhaqhw1nbu491oatrmzql7neu6rfom1haw3jg4jyhduab1ntgxugak3c5ri7uepbnt8jplwr9qdmj0k7cye5w4o7fxni6b9yjk7xvpof9ha37rtfmifvovfwwlbe9atfnom1fzbzwnn1segy85g9v89quifmzuz7n1ui4z7p6cs86kiaddb917c2ykq8vbd4xv55nhfq3jxz3abk7du409mad5kxv90k6t4496qfnjzkw3lbmjhcp5uhklozj15u2alukybvgapa10ewixrxxv2ddpcvdpljg55inl0umqcrv8o12obp29qojlq2bhbxknbi7ify5hpuswvb35mkd01uitp7ah841dcu75qxrz3u7w0x3gh6p9ngayl6327m8kjzbitrf2tv2abbyj982l3435fj6exexl6589nrz94j0o3yfdwx6pv2mecizr2wk7mxkqbqcysk66ftqzossc2u3nprb4wl7pnxll60exgcyddu15nw6chpe9obqzrfatkj9fmjsfwkhclgout0vde6jaxw4q0wcn2s == 
\p\a\1\w\9\q\2\v\n\9\g\t\z\w\9\z\i\2\n\9\g\k\1\8\0\p\6\t\k\w\0\5\f\o\9\9\b\7\4\f\i\g\h\r\u\6\d\1\1\s\v\n\t\k\w\a\w\3\f\s\c\1\i\f\4\t\7\v\y\9\4\b\9\8\s\j\f\a\r\l\6\j\o\4\m\y\r\b\k\q\y\p\l\s\b\3\f\d\2\a\2\b\j\v\d\o\m\c\x\5\e\q\1\h\a\a\q\n\v\u\2\p\u\s\6\0\k\5\b\b\n\0\3\i\r\q\9\2\4\v\g\5\0\g\i\g\7\4\2\h\x\k\s\8\1\a\8\w\e\q\6\z\p\1\1\i\b\w\l\8\4\g\x\s\2\l\g\a\i\2\j\7\r\p\a\d\u\g\8\c\n\r\9\w\n\2\y\7\8\s\p\p\u\i\x\0\v\x\1\s\7\1\c\i\s\x\2\t\h\7\3\i\l\c\7\h\d\7\q\1\9\c\a\z\o\z\l\w\1\x\s\9\j\4\y\i\e\h\2\0\y\l\c\7\f\c\w\j\0\9\y\t\p\u\i\k\u\v\m\n\s\a\2\e\7\w\8\c\5\3\9\v\l\8\0\m\q\2\l\b\m\2\a\n\q\d\7\0\o\q\7\b\6\1\l\q\1\2\3\u\6\g\l\6\t\2\w\k\9\2\v\3\2\2\6\w\l\8\k\g\u\3\f\c\d\s\o\h\b\h\g\f\s\9\e\k\r\y\7\i\j\b\4\b\m\r\9\h\l\m\g\t\l\b\4\h\2\e\g\8\j\t\n\f\m\y\d\h\0\b\s\n\x\t\o\j\l\2\6\8\v\l\o\s\p\3\n\i\4\c\y\6\s\j\j\7\0\4\z\7\n\a\w\5\q\z\2\0\b\p\5\6\i\0\c\o\r\c\4\s\k\n\7\v\b\y\h\2\p\y\q\v\y\0\j\8\6\t\0\3\c\p\s\x\t\z\0\v\0\8\s\l\d\u\8\i\h\9\5\j\p\f\0\g\4\o\r\h\a\q\h\w\1\n\b\u\4\9\1\o\a\t\r\m\z\q\l\7\n\e\u\6\r\f\o\m\1\h\a\w\3\j\g\4\j\y\h\d\u\a\b\1\n\t\g\x\u\g\a\k\3\c\5\r\i\7\u\e\p\b\n\t\8\j\p\l\w\r\9\q\d\m\j\0\k\7\c\y\e\5\w\4\o\7\f\x\n\i\6\b\9\y\j\k\7\x\v\p\o\f\9\h\a\3\7\r\t\f\m\i\f\v\o\v\f\w\w\l\b\e\9\a\t\f\n\o\m\1\f\z\b\z\w\n\n\1\s\e\g\y\8\5\g\9\v\8\9\q\u\i\f\m\z\u\z\7\n\1\u\i\4\z\7\p\6\c\s\8\6\k\i\a\d\d\b\9\1\7\c\2\y\k\q\8\v\b\d\4\x\v\5\5\n\h\f\q\3\j\x\z\3\a\b\k\7\d\u\4\0\9\m\a\d\5\k\x\v\9\0\k\6\t\4\4\9\6\q\f\n\j\z\k\w\3\l\b\m\j\h\c\p\5\u\h\k\l\o\z\j\1\5\u\2\a\l\u\k\y\b\v\g\a\p\a\1\0\e\w\i\x\r\x\x\v\2\d\d\p\c\v\d\p\l\j\g\5\5\i\n\l\0\u\m\q\c\r\v\8\o\1\2\o\b\p\2\9\q\o\j\l\q\2\b\h\b\x\k\n\b\i\7\i\f\y\5\h\p\u\s\w\v\b\3\5\m\k\d\0\1\u\i\t\p\7\a\h\8\4\1\d\c\u\7\5\q\x\r\z\3\u\7\w\0\x\3\g\h\6\p\9\n\g\a\y\l\6\3\2\7\m\8\k\j\z\b\i\t\r\f\2\t\v\2\a\b\b\y\j\9\8\2\l\3\4\3\5\f\j\6\e\x\e\x\l\6\5\8\9\n\r\z\9\4\j\0\o\3\y\f\d\w\x\6\p\v\2\m\e\c\i\z\r\2\w\k\7\m\x\k\q\b\q\c\y\s\k\6\6\f\t\q\z\o\s\s\c\2\u\3\n\p\r\b\4\w\l\7\p\n\x\l\l\6\0\e\x\g\c\y\d\d\u\1\5\n\w\6\c\h\p\e\9\o\b\q\z\r\f\a\t\k\j\9\f\m\j\s\f\w\k\h\c\l\g\o\u\t\0\v\d\e\6\j\a\x\w\4\q\0\w\c\n\2\s ]] 00:06:35.804 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:36.064 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:06:36.064 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:06:36.064 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:36.064 09:29:21 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:36.064 [2024-11-05 09:29:21.970089] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:36.064 [2024-11-05 09:29:21.970185] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61060 ] 00:06:36.064 { 00:06:36.064 "subsystems": [ 00:06:36.064 { 00:06:36.064 "subsystem": "bdev", 00:06:36.064 "config": [ 00:06:36.064 { 00:06:36.064 "params": { 00:06:36.064 "block_size": 512, 00:06:36.064 "num_blocks": 1048576, 00:06:36.064 "name": "malloc0" 00:06:36.064 }, 00:06:36.064 "method": "bdev_malloc_create" 00:06:36.064 }, 00:06:36.064 { 00:06:36.064 "params": { 00:06:36.064 "filename": "/dev/zram1", 00:06:36.064 "name": "uring0" 00:06:36.064 }, 00:06:36.064 "method": "bdev_uring_create" 00:06:36.064 }, 00:06:36.064 { 00:06:36.064 "method": "bdev_wait_for_examine" 00:06:36.064 } 00:06:36.064 ] 00:06:36.064 } 00:06:36.064 ] 00:06:36.064 } 00:06:36.323 [2024-11-05 09:29:22.117867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.323 [2024-11-05 09:29:22.147552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.323 [2024-11-05 09:29:22.176302] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:37.700  [2024-11-05T09:29:24.597Z] Copying: 161/512 [MB] (161 MBps) [2024-11-05T09:29:25.537Z] Copying: 323/512 [MB] (161 MBps) [2024-11-05T09:29:25.537Z] Copying: 480/512 [MB] (156 MBps) [2024-11-05T09:29:25.797Z] Copying: 512/512 [MB] (average 160 MBps) 00:06:39.839 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:39.839 09:29:25 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:39.839 [2024-11-05 09:29:25.770214] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
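The two runs that follow exercise the teardown path: the first (@87) drives a config that creates uring0 and then removes it with bdev_uring_delete, and the second (@94) asserts the failure mode by wrapping spdk_dd in the NOT helper, which inverts the exit status so the step passes only when spdk_dd fails. An approximation of the shape (the helper's exact body lives in autotest_common.sh, and the /dev/fd paths are wired up by the harness):

    NOT() { if "$@"; then return 1; else return 0; fi; }
    NOT spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61

This is exactly the "Could not open bdev uring0: No such device" / es=1 sequence visible further down.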
00:06:39.839 [2024-11-05 09:29:25.770961] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61111 ] 00:06:39.839 { 00:06:39.839 "subsystems": [ 00:06:39.839 { 00:06:39.839 "subsystem": "bdev", 00:06:39.839 "config": [ 00:06:39.839 { 00:06:39.839 "params": { 00:06:39.839 "block_size": 512, 00:06:39.839 "num_blocks": 1048576, 00:06:39.839 "name": "malloc0" 00:06:39.839 }, 00:06:39.839 "method": "bdev_malloc_create" 00:06:39.839 }, 00:06:39.839 { 00:06:39.839 "params": { 00:06:39.839 "filename": "/dev/zram1", 00:06:39.839 "name": "uring0" 00:06:39.839 }, 00:06:39.839 "method": "bdev_uring_create" 00:06:39.839 }, 00:06:39.839 { 00:06:39.839 "params": { 00:06:39.839 "name": "uring0" 00:06:39.839 }, 00:06:39.839 "method": "bdev_uring_delete" 00:06:39.839 }, 00:06:39.839 { 00:06:39.839 "method": "bdev_wait_for_examine" 00:06:39.839 } 00:06:39.839 ] 00:06:39.839 } 00:06:39.839 ] 00:06:39.839 } 00:06:40.098 [2024-11-05 09:29:25.919182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.098 [2024-11-05 09:29:25.949730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.099 [2024-11-05 09:29:25.977671] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.358  [2024-11-05T09:29:26.316Z] Copying: 0/0 [B] (average 0 Bps) 00:06:40.358 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:40.618 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:40.618 09:29:26 
spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:06:40.618 [2024-11-05 09:29:26.379744] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:40.618 [2024-11-05 09:29:26.379909] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61144 ] 00:06:40.618 { 00:06:40.618 "subsystems": [ 00:06:40.618 { 00:06:40.618 "subsystem": "bdev", 00:06:40.618 "config": [ 00:06:40.618 { 00:06:40.618 "params": { 00:06:40.618 "block_size": 512, 00:06:40.618 "num_blocks": 1048576, 00:06:40.618 "name": "malloc0" 00:06:40.618 }, 00:06:40.618 "method": "bdev_malloc_create" 00:06:40.618 }, 00:06:40.618 { 00:06:40.618 "params": { 00:06:40.618 "filename": "/dev/zram1", 00:06:40.618 "name": "uring0" 00:06:40.618 }, 00:06:40.618 "method": "bdev_uring_create" 00:06:40.618 }, 00:06:40.618 { 00:06:40.618 "params": { 00:06:40.618 "name": "uring0" 00:06:40.618 }, 00:06:40.618 "method": "bdev_uring_delete" 00:06:40.618 }, 00:06:40.618 { 00:06:40.618 "method": "bdev_wait_for_examine" 00:06:40.618 } 00:06:40.618 ] 00:06:40.618 } 00:06:40.618 ] 00:06:40.618 } 00:06:40.618 [2024-11-05 09:29:26.527976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.618 [2024-11-05 09:29:26.556882] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.878 [2024-11-05 09:29:26.586442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:40.878 [2024-11-05 09:29:26.709937] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:06:40.878 [2024-11-05 09:29:26.710012] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:06:40.878 [2024-11-05 09:29:26.710038] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:06:40.878 [2024-11-05 09:29:26.710047] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:41.137 [2024-11-05 09:29:26.872700] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:41.137 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:06:41.137 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:41.137 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:06:41.138 09:29:26 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:06:41.397 00:06:41.397 real 0m13.093s 00:06:41.397 user 0m8.793s 00:06:41.397 sys 0m11.530s 00:06:41.397 09:29:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.397 09:29:27 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:06:41.397 ************************************ 00:06:41.397 END TEST dd_uring_copy 00:06:41.397 ************************************ 00:06:41.397 ************************************ 00:06:41.397 END TEST spdk_dd_uring 00:06:41.397 ************************************ 00:06:41.397 00:06:41.397 real 0m13.321s 00:06:41.397 user 0m8.934s 00:06:41.397 sys 0m11.625s 00:06:41.397 09:29:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:41.397 09:29:27 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:06:41.397 09:29:27 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:41.397 09:29:27 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.397 09:29:27 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.397 09:29:27 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:41.397 ************************************ 00:06:41.397 START TEST spdk_dd_sparse 00:06:41.397 ************************************ 00:06:41.397 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:06:41.658 * Looking for test storage... 00:06:41.658 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lcov --version 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.658 --rc genhtml_branch_coverage=1 00:06:41.658 --rc genhtml_function_coverage=1 00:06:41.658 --rc genhtml_legend=1 00:06:41.658 --rc geninfo_all_blocks=1 00:06:41.658 --rc geninfo_unexecuted_blocks=1 00:06:41.658 00:06:41.658 ' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.658 --rc genhtml_branch_coverage=1 00:06:41.658 --rc genhtml_function_coverage=1 00:06:41.658 --rc genhtml_legend=1 00:06:41.658 --rc geninfo_all_blocks=1 00:06:41.658 --rc geninfo_unexecuted_blocks=1 00:06:41.658 00:06:41.658 ' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.658 --rc genhtml_branch_coverage=1 00:06:41.658 --rc genhtml_function_coverage=1 00:06:41.658 --rc genhtml_legend=1 00:06:41.658 --rc geninfo_all_blocks=1 00:06:41.658 --rc geninfo_unexecuted_blocks=1 00:06:41.658 00:06:41.658 ' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:41.658 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.658 --rc genhtml_branch_coverage=1 00:06:41.658 --rc genhtml_function_coverage=1 00:06:41.658 --rc genhtml_legend=1 00:06:41.658 --rc geninfo_all_blocks=1 00:06:41.658 --rc geninfo_unexecuted_blocks=1 00:06:41.658 00:06:41.658 ' 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.658 09:29:27 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:06:41.658 1+0 records in 00:06:41.658 1+0 records out 00:06:41.658 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00520751 s, 805 MB/s 00:06:41.658 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:06:41.659 1+0 records in 00:06:41.659 1+0 records out 00:06:41.659 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00535491 s, 783 MB/s 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:06:41.659 1+0 records in 00:06:41.659 1+0 records out 00:06:41.659 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00448798 s, 935 MB/s 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:41.659 ************************************ 00:06:41.659 START TEST dd_sparse_file_to_file 00:06:41.659 ************************************ 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1127 -- # file_to_file 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:41.659 09:29:27 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:41.659 [2024-11-05 09:29:27.614593] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
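The fixture that all three dd_sparse_* subtests operate on is built by the prepare step traced above (dd/sparse.sh@18-22). A minimal sketch of what it runs, reconstructed from the traced commands:

    # 100 MiB backing file for the AIO bdev (created sparse by truncate)
    truncate dd_sparse_aio_disk --size 104857600
    # three 4 MiB data extents at offsets 0, 16 MiB and 32 MiB; the gaps in
    # between stay unallocated, so file_zero1 ends up with a 36 MiB apparent
    # size (37748736 bytes) but only 12 MiB of allocated blocks
    dd if=/dev/zero of=file_zero1 bs=4M count=1
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4
    dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8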
00:06:41.659 [2024-11-05 09:29:27.614705] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61242 ] 00:06:41.917 { 00:06:41.917 "subsystems": [ 00:06:41.917 { 00:06:41.917 "subsystem": "bdev", 00:06:41.917 "config": [ 00:06:41.917 { 00:06:41.917 "params": { 00:06:41.917 "block_size": 4096, 00:06:41.917 "filename": "dd_sparse_aio_disk", 00:06:41.917 "name": "dd_aio" 00:06:41.917 }, 00:06:41.917 "method": "bdev_aio_create" 00:06:41.917 }, 00:06:41.917 { 00:06:41.917 "params": { 00:06:41.917 "lvs_name": "dd_lvstore", 00:06:41.917 "bdev_name": "dd_aio" 00:06:41.917 }, 00:06:41.917 "method": "bdev_lvol_create_lvstore" 00:06:41.917 }, 00:06:41.917 { 00:06:41.917 "method": "bdev_wait_for_examine" 00:06:41.917 } 00:06:41.917 ] 00:06:41.917 } 00:06:41.917 ] 00:06:41.917 } 00:06:41.917 [2024-11-05 09:29:27.761527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.917 [2024-11-05 09:29:27.791045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.917 [2024-11-05 09:29:27.821058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.176  [2024-11-05T09:29:28.134Z] Copying: 12/36 [MB] (average 923 MBps) 00:06:42.176 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:42.176 00:06:42.176 real 0m0.515s 00:06:42.176 user 0m0.321s 00:06:42.176 sys 0m0.241s 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:42.176 ************************************ 00:06:42.176 END TEST dd_sparse_file_to_file 00:06:42.176 ************************************ 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:42.176 ************************************ 00:06:42.176 START TEST dd_sparse_file_to_bdev 
00:06:42.176 ************************************ 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1127 -- # file_to_bdev 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:06:42.176 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:06:42.177 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:06:42.177 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:06:42.177 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:06:42.177 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:42.436 [2024-11-05 09:29:28.182286] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:42.436 [2024-11-05 09:29:28.182406] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61290 ] 00:06:42.436 { 00:06:42.436 "subsystems": [ 00:06:42.436 { 00:06:42.436 "subsystem": "bdev", 00:06:42.436 "config": [ 00:06:42.436 { 00:06:42.436 "params": { 00:06:42.436 "block_size": 4096, 00:06:42.436 "filename": "dd_sparse_aio_disk", 00:06:42.436 "name": "dd_aio" 00:06:42.436 }, 00:06:42.436 "method": "bdev_aio_create" 00:06:42.436 }, 00:06:42.436 { 00:06:42.436 "params": { 00:06:42.436 "lvs_name": "dd_lvstore", 00:06:42.436 "lvol_name": "dd_lvol", 00:06:42.436 "size_in_mib": 36, 00:06:42.436 "thin_provision": true 00:06:42.436 }, 00:06:42.436 "method": "bdev_lvol_create" 00:06:42.436 }, 00:06:42.436 { 00:06:42.436 "method": "bdev_wait_for_examine" 00:06:42.436 } 00:06:42.436 ] 00:06:42.436 } 00:06:42.436 ] 00:06:42.436 } 00:06:42.436 [2024-11-05 09:29:28.328380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.436 [2024-11-05 09:29:28.357285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.436 [2024-11-05 09:29:28.386122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.696  [2024-11-05T09:29:28.654Z] Copying: 12/36 [MB] (average 500 MBps) 00:06:42.696 00:06:42.696 00:06:42.696 real 0m0.473s 00:06:42.696 user 0m0.302s 00:06:42.696 sys 0m0.226s 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:06:42.696 ************************************ 00:06:42.696 END TEST dd_sparse_file_to_bdev 00:06:42.696 ************************************ 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file 
bdev_to_file 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:42.696 ************************************ 00:06:42.696 START TEST dd_sparse_bdev_to_file 00:06:42.696 ************************************ 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1127 -- # bdev_to_file 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:06:42.696 09:29:28 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:42.956 [2024-11-05 09:29:28.699047] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:42.956 [2024-11-05 09:29:28.699153] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61317 ] 00:06:42.956 { 00:06:42.956 "subsystems": [ 00:06:42.956 { 00:06:42.956 "subsystem": "bdev", 00:06:42.956 "config": [ 00:06:42.956 { 00:06:42.956 "params": { 00:06:42.956 "block_size": 4096, 00:06:42.956 "filename": "dd_sparse_aio_disk", 00:06:42.956 "name": "dd_aio" 00:06:42.956 }, 00:06:42.956 "method": "bdev_aio_create" 00:06:42.956 }, 00:06:42.956 { 00:06:42.956 "method": "bdev_wait_for_examine" 00:06:42.956 } 00:06:42.956 ] 00:06:42.956 } 00:06:42.956 ] 00:06:42.956 } 00:06:42.956 [2024-11-05 09:29:28.840848] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.956 [2024-11-05 09:29:28.869566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.956 [2024-11-05 09:29:28.898209] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.215  [2024-11-05T09:29:29.173Z] Copying: 12/36 [MB] (average 1090 MBps) 00:06:43.215 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 
37748736 == \3\7\7\4\8\7\3\6 ]] 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:06:43.215 00:06:43.215 real 0m0.476s 00:06:43.215 user 0m0.294s 00:06:43.215 sys 0m0.242s 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.215 ************************************ 00:06:43.215 END TEST dd_sparse_bdev_to_file 00:06:43.215 ************************************ 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:06:43.215 09:29:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:06:43.475 09:29:29 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:06:43.475 00:06:43.475 real 0m1.878s 00:06:43.475 user 0m1.109s 00:06:43.475 sys 0m0.927s 00:06:43.475 09:29:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.475 09:29:29 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:06:43.475 ************************************ 00:06:43.475 END TEST spdk_dd_sparse 00:06:43.475 ************************************ 00:06:43.475 09:29:29 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:43.475 09:29:29 spdk_dd -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.475 09:29:29 spdk_dd -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.475 09:29:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:43.475 ************************************ 00:06:43.475 START TEST spdk_dd_negative 00:06:43.475 ************************************ 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:06:43.475 * Looking for test storage... 
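Each sparse subtest above decides pass/fail with the paired stat checks visible in the trace: --printf=%s reports the apparent size in bytes, while %b reports allocated blocks (512 bytes each here), so an unchanged %s together with a small %b shows the holes survived the copy. A sketch using the traced values:

    apparent=$(stat --printf=%s file_zero2)   # 37748736 bytes = 36 MiB apparent size
    blocks=$(stat --printf=%b file_zero2)     # 24576 blocks * 512 B = 12 MiB allocated
    [[ $apparent == 37748736 && $blocks == 24576 ]] && echo 'sparseness preserved'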
00:06:43.475 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lcov --version 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.475 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.735 --rc genhtml_branch_coverage=1 00:06:43.735 --rc genhtml_function_coverage=1 00:06:43.735 --rc genhtml_legend=1 00:06:43.735 --rc geninfo_all_blocks=1 00:06:43.735 --rc geninfo_unexecuted_blocks=1 00:06:43.735 00:06:43.735 ' 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.735 --rc genhtml_branch_coverage=1 00:06:43.735 --rc genhtml_function_coverage=1 00:06:43.735 --rc genhtml_legend=1 00:06:43.735 --rc geninfo_all_blocks=1 00:06:43.735 --rc geninfo_unexecuted_blocks=1 00:06:43.735 00:06:43.735 ' 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.735 --rc genhtml_branch_coverage=1 00:06:43.735 --rc genhtml_function_coverage=1 00:06:43.735 --rc genhtml_legend=1 00:06:43.735 --rc geninfo_all_blocks=1 00:06:43.735 --rc geninfo_unexecuted_blocks=1 00:06:43.735 00:06:43.735 ' 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:43.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.735 --rc genhtml_branch_coverage=1 00:06:43.735 --rc genhtml_function_coverage=1 00:06:43.735 --rc genhtml_legend=1 00:06:43.735 --rc geninfo_all_blocks=1 00:06:43.735 --rc geninfo_unexecuted_blocks=1 00:06:43.735 00:06:43.735 ' 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:43.735 ************************************ 00:06:43.735 START TEST 
dd_invalid_arguments 00:06:43.735 ************************************ 00:06:43.735 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1127 -- # invalid_arguments 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:06:43.736 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:06:43.736 00:06:43.736 CPU options: 00:06:43.736 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:06:43.736 (like [0,1,10]) 00:06:43.736 --lcores lcore to CPU mapping list. The list is in the format: 00:06:43.736 [<,lcores[@CPUs]>...] 00:06:43.736 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:06:43.736 Within the group, '-' is used for range separator, 00:06:43.736 ',' is used for single number separator. 00:06:43.736 '( )' can be omitted for single element group, 00:06:43.736 '@' can be omitted if cpus and lcores have the same value 00:06:43.736 --disable-cpumask-locks Disable CPU core lock files. 00:06:43.736 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:06:43.736 pollers in the app support interrupt mode) 00:06:43.736 -p, --main-core main (primary) core for DPDK 00:06:43.736 00:06:43.736 Configuration options: 00:06:43.736 -c, --config, --json JSON config file 00:06:43.736 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:06:43.736 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:06:43.736 --wait-for-rpc wait for RPCs to initialize subsystems 00:06:43.736 --rpcs-allowed comma-separated list of permitted RPCS 00:06:43.736 --json-ignore-init-errors don't exit on invalid config entry 00:06:43.736 00:06:43.736 Memory options: 00:06:43.736 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:06:43.736 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:06:43.736 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:06:43.736 -R, --huge-unlink unlink huge files after initialization 00:06:43.736 -n, --mem-channels number of memory channels used for DPDK 00:06:43.736 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:06:43.736 --msg-mempool-size global message memory pool size in count (default: 262143) 00:06:43.736 --no-huge run without using hugepages 00:06:43.736 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:06:43.736 -i, --shm-id shared memory ID (optional) 00:06:43.736 -g, --single-file-segments force creating just one hugetlbfs file 00:06:43.736 00:06:43.736 PCI options: 00:06:43.736 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:06:43.736 -B, --pci-blocked pci addr to block (can be used more than once) 00:06:43.736 -u, --no-pci disable PCI access 00:06:43.736 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:06:43.736 00:06:43.736 Log options: 00:06:43.736 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:06:43.736 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:06:43.736 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:06:43.736 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:06:43.736 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:06:43.736 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:06:43.736 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:06:43.736 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:06:43.736 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:06:43.736 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:06:43.736 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:06:43.736 --silence-noticelog disable notice level logging to stderr 00:06:43.736 00:06:43.736 Trace options: 00:06:43.736 --num-trace-entries number of trace entries for each core, must be power of 2, 00:06:43.736 setting 0 to disable trace (default 32768) 00:06:43.736 Tracepoints vary in size and can use more than one trace entry. 00:06:43.736 -e, --tpoint-group [:] 00:06:43.736 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:06:43.736 [2024-11-05 09:29:29.521645] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:06:43.736 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:06:43.736 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:06:43.736 bdev_raid, scheduler, all). 00:06:43.736 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:06:43.736 a tracepoint group. First tpoint inside a group can be enabled by 00:06:43.736 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:06:43.736 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:06:43.736 in /include/spdk_internal/trace_defs.h 00:06:43.736 00:06:43.736 Other options: 00:06:43.736 -h, --help show this usage 00:06:43.736 -v, --version print SPDK version 00:06:43.736 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:06:43.736 --env-context Opaque context for use of the env implementation 00:06:43.736 00:06:43.736 Application specific: 00:06:43.736 [--------- DD Options ---------] 00:06:43.736 --if Input file. Must specify either --if or --ib. 00:06:43.736 --ib Input bdev. Must specifier either --if or --ib 00:06:43.736 --of Output file. Must specify either --of or --ob. 00:06:43.736 --ob Output bdev. Must specify either --of or --ob. 00:06:43.736 --iflag Input file flags. 00:06:43.736 --oflag Output file flags. 00:06:43.736 --bs I/O unit size (default: 4096) 00:06:43.736 --qd Queue depth (default: 2) 00:06:43.736 --count I/O unit count. The number of I/O units to copy. (default: all) 00:06:43.736 --skip Skip this many I/O units at start of input. (default: 0) 00:06:43.736 --seek Skip this many I/O units at start of output. (default: 0) 00:06:43.736 --aio Force usage of AIO. (by default io_uring is used if available) 00:06:43.736 --sparse Enable hole skipping in input target 00:06:43.736 Available iflag and oflag values: 00:06:43.736 append - append mode 00:06:43.736 direct - use direct I/O for data 00:06:43.736 directory - fail unless a directory 00:06:43.736 dsync - use synchronized I/O for data 00:06:43.736 noatime - do not update access time 00:06:43.736 noctty - do not assign controlling terminal from file 00:06:43.736 nofollow - do not follow symlinks 00:06:43.736 nonblock - use non-blocking I/O 00:06:43.736 sync - use synchronized I/O for data and metadata 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.736 00:06:43.736 real 0m0.079s 00:06:43.736 user 0m0.050s 00:06:43.736 sys 0m0.028s 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.736 ************************************ 00:06:43.736 END TEST dd_invalid_arguments 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:06:43.736 ************************************ 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:43.736 ************************************ 00:06:43.736 START TEST dd_double_input 00:06:43.736 ************************************ 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1127 -- # double_input 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.736 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:06:43.737 [2024-11-05 09:29:29.658255] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
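Every negative case in this suite follows the pattern just shown: the spdk_dd invocation is wrapped in the NOT helper, and the es= lines that follow capture and test its exit status, so a case passes only if spdk_dd rejects the arguments. A simplified sketch of the helper (the real one in test/common/autotest_common.sh also validates the executable and post-processes statuses above 128, as the trace shows):

    NOT() {
        local es=0
        "$@" || es=$?
        (( !es == 0 ))   # succeed only when the wrapped command failed
    }
    # e.g. the dd_double_input case above:
    NOT spdk_dd --if=dd.dump0 --ib= --ob=   # an input file plus an input bdev -> rejected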
00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.737 00:06:43.737 real 0m0.081s 00:06:43.737 user 0m0.043s 00:06:43.737 sys 0m0.037s 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.737 ************************************ 00:06:43.737 END TEST dd_double_input 00:06:43.737 09:29:29 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:06:43.737 ************************************ 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:43.996 ************************************ 00:06:43.996 START TEST dd_double_output 00:06:43.996 ************************************ 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1127 -- # double_output 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:06:43.996 [2024-11-05 09:29:29.796229] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:43.996 00:06:43.996 real 0m0.083s 00:06:43.996 user 0m0.049s 00:06:43.996 sys 0m0.032s 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:43.996 ************************************ 00:06:43.996 END TEST dd_double_output 00:06:43.996 ************************************ 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:43.996 ************************************ 00:06:43.996 START TEST dd_no_input 00:06:43.996 ************************************ 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1127 -- # no_input 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:06:43.996 [2024-11-05 09:29:29.934667] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:43.996 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.258 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.258 00:06:44.258 real 0m0.083s 00:06:44.258 user 0m0.057s 00:06:44.258 sys 0m0.025s 00:06:44.258 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.258 ************************************ 00:06:44.258 09:29:29 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:06:44.258 END TEST dd_no_input 00:06:44.258 ************************************ 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:44.258 ************************************ 00:06:44.258 START TEST dd_no_output 00:06:44.258 ************************************ 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1127 -- # no_output 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:06:44.258 [2024-11-05 09:29:30.077177] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:06:44.258 09:29:30 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.258 00:06:44.258 real 0m0.080s 00:06:44.258 user 0m0.053s 00:06:44.258 sys 0m0.026s 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.258 ************************************ 00:06:44.258 END TEST dd_no_output 00:06:44.258 ************************************ 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:44.258 ************************************ 00:06:44.258 START TEST dd_wrong_blocksize 00:06:44.258 ************************************ 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1127 -- # wrong_blocksize 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.258 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:06:44.258 [2024-11-05 09:29:30.208750] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:44.518 00:06:44.518 real 0m0.078s 00:06:44.518 user 0m0.045s 00:06:44.518 sys 0m0.031s 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:44.518 ************************************ 00:06:44.518 END TEST dd_wrong_blocksize 00:06:44.518 ************************************ 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:44.518 ************************************ 00:06:44.518 START TEST dd_smaller_blocksize 00:06:44.518 ************************************ 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1127 -- # smaller_blocksize 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:44.518 
09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:44.518 09:29:30 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:06:44.518 [2024-11-05 09:29:30.344416] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:44.518 [2024-11-05 09:29:30.344551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61549 ] 00:06:44.778 [2024-11-05 09:29:30.498037] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.778 [2024-11-05 09:29:30.538796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.778 [2024-11-05 09:29:30.573813] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:45.037 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:45.296 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:06:45.296 [2024-11-05 09:29:31.074561] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:06:45.296 [2024-11-05 09:29:31.074627] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.296 [2024-11-05 09:29:31.140952] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.296 00:06:45.296 real 0m0.917s 00:06:45.296 user 0m0.331s 00:06:45.296 sys 0m0.478s 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.296 09:29:31 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:06:45.296 ************************************ 00:06:45.296 END TEST dd_smaller_blocksize 00:06:45.297 ************************************ 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:45.297 ************************************ 00:06:45.297 START TEST dd_invalid_count 00:06:45.297 ************************************ 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1127 -- # invalid_count 
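dd_smaller_blocksize above is the one case where spdk_dd gets past argument parsing before failing: --bs=99999999999999 makes its buffer allocation fail ("Cannot allocate memory - try smaller block size value"), and the raw exit status 244 is plausibly -ENOMEM (-12) wrapped to an unsigned byte. The es handling traced at autotest_common.sh@653-677 then normalizes the status before asserting failure; a sketch of that path:

    es=244
    (( es > 128 )) && es=$(( es - 128 ))   # strip the high bit: 244 -> 116
    case "$es" in                          # the real case statement distinguishes
        *) es=1 ;;                         # several statuses; here everything left
    esac                                   # collapses to a generic failure
    (( !es == 0 ))                         # NOT passes: spdk_dd did fail as expected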
00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.297 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:06:45.556 [2024-11-05 09:29:31.313709] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.556 00:06:45.556 real 0m0.078s 00:06:45.556 user 0m0.048s 00:06:45.556 sys 0m0.029s 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.556 ************************************ 00:06:45.556 END TEST dd_invalid_count 00:06:45.556 ************************************ 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:45.556 ************************************ 
00:06:45.556 START TEST dd_invalid_oflag 00:06:45.556 ************************************ 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1127 -- # invalid_oflag 00:06:45.556 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:06:45.557 [2024-11-05 09:29:31.428312] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.557 00:06:45.557 real 0m0.060s 00:06:45.557 user 0m0.039s 00:06:45.557 sys 0m0.021s 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.557 ************************************ 00:06:45.557 END TEST dd_invalid_oflag 00:06:45.557 ************************************ 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:45.557 ************************************ 00:06:45.557 START TEST dd_invalid_iflag 00:06:45.557 
************************************ 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1127 -- # invalid_iflag 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.557 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:06:45.816 [2024-11-05 09:29:31.551559] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:45.816 00:06:45.816 real 0m0.079s 00:06:45.816 user 0m0.039s 00:06:45.816 sys 0m0.038s 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:06:45.816 ************************************ 00:06:45.816 END TEST dd_invalid_iflag 00:06:45.816 ************************************ 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:45.816 ************************************ 00:06:45.816 START TEST dd_unknown_flag 00:06:45.816 ************************************ 00:06:45.816 
09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1127 -- # unknown_flag 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:45.816 09:29:31 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:06:45.816 [2024-11-05 09:29:31.686162] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
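# The dd_unknown_flag run starting above passes --oflag=-1: spdk_dd brings up
# the full SPDK app (the EAL, reactor and sock notices that follow), then
# parse_flags rejects the flag and the app stops with a non-zero rc. A sketch
# of the observable behavior:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1
echo $?   # 234, i.e. 256 - EINVAL; stderr shows "Unknown file flag: -1"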
00:06:45.816 [2024-11-05 09:29:31.686294] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61641 ] 00:06:46.075 [2024-11-05 09:29:31.835099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.075 [2024-11-05 09:29:31.866083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.075 [2024-11-05 09:29:31.895931] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.075 [2024-11-05 09:29:31.914911] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:46.075 [2024-11-05 09:29:31.914988] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.075 [2024-11-05 09:29:31.915042] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:06:46.075 [2024-11-05 09:29:31.915072] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.075 [2024-11-05 09:29:31.915315] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:06:46.075 [2024-11-05 09:29:31.915332] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.075 [2024-11-05 09:29:31.915380] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:46.075 [2024-11-05 09:29:31.915392] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:06:46.075 [2024-11-05 09:29:31.986180] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.334 00:06:46.334 real 0m0.423s 00:06:46.334 user 0m0.221s 00:06:46.334 sys 0m0.110s 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.334 ************************************ 00:06:46.334 END TEST dd_unknown_flag 00:06:46.334 ************************************ 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.334 ************************************ 00:06:46.334 START TEST dd_invalid_json 00:06:46.334 ************************************ 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1127 -- # invalid_json 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.334 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:06:46.334 [2024-11-05 09:29:32.156952] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
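# dd_invalid_json points --json at /dev/fd/62 while feeding it nothing (the
# bare ":" at negative_dd.sh@94 above), so config parsing fails before any
# copy is attempted. A standalone sketch, with process substitution standing
# in for the harness's fd plumbing:
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
           --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json <(:)
# -> parse_json: "JSON data cannot be empty"; exit status 234 (256 - EINVAL)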
00:06:46.334 [2024-11-05 09:29:32.157072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61664 ] 00:06:46.593 [2024-11-05 09:29:32.303767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.593 [2024-11-05 09:29:32.334453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.593 [2024-11-05 09:29:32.334558] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:06:46.593 [2024-11-05 09:29:32.334575] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:46.593 [2024-11-05 09:29:32.334585] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:46.593 [2024-11-05 09:29:32.334621] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:46.593 00:06:46.593 real 0m0.294s 00:06:46.593 user 0m0.142s 00:06:46.593 sys 0m0.051s 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:06:46.593 ************************************ 00:06:46.593 END TEST dd_invalid_json 00:06:46.593 ************************************ 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:06:46.593 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 ************************************ 00:06:46.594 START TEST dd_invalid_seek 00:06:46.594 ************************************ 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1127 -- # invalid_seek 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:46.594 
09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:46.594 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:06:46.594 { 00:06:46.594 "subsystems": [ 00:06:46.594 { 00:06:46.594 "subsystem": "bdev", 00:06:46.594 "config": [ 00:06:46.594 { 00:06:46.594 "params": { 00:06:46.594 "block_size": 512, 00:06:46.594 "num_blocks": 512, 00:06:46.594 "name": "malloc0" 00:06:46.594 }, 00:06:46.594 "method": "bdev_malloc_create" 00:06:46.594 }, 00:06:46.594 { 00:06:46.594 "params": { 00:06:46.594 "block_size": 512, 00:06:46.594 "num_blocks": 512, 00:06:46.594 "name": "malloc1" 00:06:46.594 }, 00:06:46.594 "method": "bdev_malloc_create" 00:06:46.594 }, 00:06:46.594 { 00:06:46.594 "method": "bdev_wait_for_examine" 00:06:46.594 } 00:06:46.594 ] 00:06:46.594 } 00:06:46.594 ] 00:06:46.594 } 00:06:46.594 [2024-11-05 09:29:32.510114] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
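# The JSON config printed above creates two malloc bdevs of 512 blocks x 512
# bytes each; --seek=513 would start writing one block past the end of malloc1,
# so dd_run rejects it before issuing any I/O. An equivalent standalone sketch
# (config saved to a file; that --json accepts an ordinary path is an
# assumption here -- the harness always passes /dev/fd/62):
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --seek=513 --json bdev.json
# -> "--seek value too big (513) - only 512 blocks available in output"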
00:06:46.594 [2024-11-05 09:29:32.510209] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61699 ] 00:06:46.852 [2024-11-05 09:29:32.658735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.852 [2024-11-05 09:29:32.689671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.852 [2024-11-05 09:29:32.720637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.852 [2024-11-05 09:29:32.768066] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:06:46.852 [2024-11-05 09:29:32.768137] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.110 [2024-11-05 09:29:32.839210] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.110 00:06:47.110 real 0m0.453s 00:06:47.110 user 0m0.299s 00:06:47.110 sys 0m0.116s 00:06:47.110 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.111 ************************************ 00:06:47.111 END TEST dd_invalid_seek 00:06:47.111 ************************************ 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.111 ************************************ 00:06:47.111 START TEST dd_invalid_skip 00:06:47.111 ************************************ 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1127 -- # invalid_skip 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.111 09:29:32 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:06:47.111 { 00:06:47.111 "subsystems": [ 00:06:47.111 { 00:06:47.111 "subsystem": "bdev", 00:06:47.111 "config": [ 00:06:47.111 { 00:06:47.111 "params": { 00:06:47.111 "block_size": 512, 00:06:47.111 "num_blocks": 512, 00:06:47.111 "name": "malloc0" 00:06:47.111 }, 00:06:47.111 "method": "bdev_malloc_create" 00:06:47.111 }, 00:06:47.111 { 00:06:47.111 "params": { 00:06:47.111 "block_size": 512, 00:06:47.111 "num_blocks": 512, 00:06:47.111 "name": "malloc1" 00:06:47.111 }, 00:06:47.111 "method": "bdev_malloc_create" 00:06:47.111 }, 00:06:47.111 { 00:06:47.111 "method": "bdev_wait_for_examine" 00:06:47.111 } 00:06:47.111 ] 00:06:47.111 } 00:06:47.111 ] 00:06:47.111 } 00:06:47.111 [2024-11-05 09:29:33.013142] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
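# dd_invalid_skip is the mirror image on the input side: --skip=513 against the
# same 512-block malloc0 (sketch; bdev.json as assumed in the previous note):
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=512 --skip=513 --json bdev.json
# -> "--skip value too big (513) - only 512 blocks available in input"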
00:06:47.111 [2024-11-05 09:29:33.013236] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61727 ] 00:06:47.370 [2024-11-05 09:29:33.166103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.370 [2024-11-05 09:29:33.206732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.370 [2024-11-05 09:29:33.244293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.370 [2024-11-05 09:29:33.295876] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:06:47.370 [2024-11-05 09:29:33.295934] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:47.629 [2024-11-05 09:29:33.372038] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:47.629 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:06:47.629 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:47.629 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:06:47.629 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:06:47.629 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:06:47.629 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:47.629 00:06:47.629 real 0m0.483s 00:06:47.629 user 0m0.321s 00:06:47.629 sys 0m0.119s 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:47.630 ************************************ 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:06:47.630 END TEST dd_invalid_skip 00:06:47.630 ************************************ 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:47.630 ************************************ 00:06:47.630 START TEST dd_invalid_input_count 00:06:47.630 ************************************ 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1127 -- # invalid_input_count 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:47.630 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:06:47.630 [2024-11-05 09:29:33.543156] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
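# Configuration convention visible in the xtrace: each bash associative array
# named method_<rpc>_<n> becomes one {"method": "<rpc>", "params": {...}} entry
# in the JSON that gen_conf prints below (a reading of the trace, not of
# gen_conf's actual implementation):
declare -A method_bdev_malloc_create_0=([name]=malloc0 [num_blocks]=512 [block_size]=512)
declare -A method_bdev_malloc_create_1=([name]=malloc1 [num_blocks]=512 [block_size]=512)
# -> "config": [ { "params": { ... "name": "malloc0" }, "method": "bdev_malloc_create" }, ... ]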
00:06:47.630 [2024-11-05 09:29:33.543244] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61766 ] 00:06:47.630 { 00:06:47.630 "subsystems": [ 00:06:47.630 { 00:06:47.630 "subsystem": "bdev", 00:06:47.630 "config": [ 00:06:47.630 { 00:06:47.630 "params": { 00:06:47.630 "block_size": 512, 00:06:47.630 "num_blocks": 512, 00:06:47.630 "name": "malloc0" 00:06:47.630 }, 00:06:47.630 "method": "bdev_malloc_create" 00:06:47.630 }, 00:06:47.630 { 00:06:47.630 "params": { 00:06:47.630 "block_size": 512, 00:06:47.630 "num_blocks": 512, 00:06:47.630 "name": "malloc1" 00:06:47.630 }, 00:06:47.630 "method": "bdev_malloc_create" 00:06:47.630 }, 00:06:47.630 { 00:06:47.630 "method": "bdev_wait_for_examine" 00:06:47.630 } 00:06:47.630 ] 00:06:47.630 } 00:06:47.630 ] 00:06:47.630 } 00:06:47.888 [2024-11-05 09:29:33.688663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.888 [2024-11-05 09:29:33.723434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.888 [2024-11-05 09:29:33.756182] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:47.888 [2024-11-05 09:29:33.804764] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:06:47.888 [2024-11-05 09:29:33.804856] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.147 [2024-11-05 09:29:33.879040] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.147 00:06:48.147 real 0m0.449s 00:06:48.147 user 0m0.291s 00:06:48.147 sys 0m0.119s 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:06:48.147 ************************************ 00:06:48.147 END TEST dd_invalid_input_count 00:06:48.147 ************************************ 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.147 ************************************ 00:06:48.147 START TEST dd_invalid_output_count 00:06:48.147 ************************************ 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1127 -- # 
invalid_output_count 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.147 09:29:33 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:06:48.147 { 00:06:48.147 "subsystems": [ 00:06:48.147 { 00:06:48.147 "subsystem": "bdev", 00:06:48.147 "config": [ 00:06:48.147 { 00:06:48.147 "params": { 00:06:48.147 "block_size": 512, 00:06:48.147 "num_blocks": 512, 00:06:48.147 "name": "malloc0" 00:06:48.147 }, 00:06:48.147 "method": "bdev_malloc_create" 00:06:48.147 }, 00:06:48.147 { 00:06:48.147 "method": "bdev_wait_for_examine" 00:06:48.147 } 00:06:48.147 ] 00:06:48.147 } 00:06:48.147 ] 00:06:48.147 } 00:06:48.147 [2024-11-05 09:29:34.047103] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 
initialization... 00:06:48.147 [2024-11-05 09:29:34.047206] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61794 ] 00:06:48.406 [2024-11-05 09:29:34.194822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.406 [2024-11-05 09:29:34.228206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.406 [2024-11-05 09:29:34.261080] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.406 [2024-11-05 09:29:34.299061] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:06:48.406 [2024-11-05 09:29:34.299162] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.664 [2024-11-05 09:29:34.370399] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.664 00:06:48.664 real 0m0.448s 00:06:48.664 user 0m0.277s 00:06:48.664 sys 0m0.122s 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:06:48.664 ************************************ 00:06:48.664 END TEST dd_invalid_output_count 00:06:48.664 ************************************ 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:48.664 ************************************ 00:06:48.664 START TEST dd_bs_not_multiple 00:06:48.664 ************************************ 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1127 -- # bs_not_multiple 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:06:48.664 09:29:34 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:06:48.664 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:06:48.665 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:06:48.665 [2024-11-05 09:29:34.546248] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
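# The last negative case: --bs must be a multiple of the input bdev's native
# block size. With 512-byte malloc blocks, --bs=513 is rejected up front
# (sketch; same assumed bdev.json as above):
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
"$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513 --json bdev.json
# -> "--bs value must be a multiple of input native block size (512)"; exit 234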
00:06:48.665 [2024-11-05 09:29:34.546939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61830 ] 00:06:48.665 { 00:06:48.665 "subsystems": [ 00:06:48.665 { 00:06:48.665 "subsystem": "bdev", 00:06:48.665 "config": [ 00:06:48.665 { 00:06:48.665 "params": { 00:06:48.665 "block_size": 512, 00:06:48.665 "num_blocks": 512, 00:06:48.665 "name": "malloc0" 00:06:48.665 }, 00:06:48.665 "method": "bdev_malloc_create" 00:06:48.665 }, 00:06:48.665 { 00:06:48.665 "params": { 00:06:48.665 "block_size": 512, 00:06:48.665 "num_blocks": 512, 00:06:48.665 "name": "malloc1" 00:06:48.665 }, 00:06:48.665 "method": "bdev_malloc_create" 00:06:48.665 }, 00:06:48.665 { 00:06:48.665 "method": "bdev_wait_for_examine" 00:06:48.665 } 00:06:48.665 ] 00:06:48.665 } 00:06:48.665 ] 00:06:48.665 } 00:06:48.923 [2024-11-05 09:29:34.697387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.923 [2024-11-05 09:29:34.734271] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.923 [2024-11-05 09:29:34.768862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.923 [2024-11-05 09:29:34.813912] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:06:48.923 [2024-11-05 09:29:34.814005] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:48.923 [2024-11-05 09:29:34.878216] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:49.182 00:06:49.182 real 0m0.445s 00:06:49.182 user 0m0.281s 00:06:49.182 sys 0m0.127s 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:06:49.182 ************************************ 00:06:49.182 END TEST dd_bs_not_multiple 00:06:49.182 ************************************ 00:06:49.182 00:06:49.182 real 0m5.728s 00:06:49.182 user 0m2.993s 00:06:49.182 sys 0m2.153s 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.182 09:29:34 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:06:49.182 ************************************ 00:06:49.182 END TEST spdk_dd_negative 00:06:49.182 ************************************ 00:06:49.182 00:06:49.182 real 1m3.262s 00:06:49.182 user 0m40.050s 00:06:49.182 sys 0m27.078s 00:06:49.183 09:29:35 spdk_dd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.183 ************************************ 00:06:49.183 END TEST spdk_dd 00:06:49.183 
************************************ 00:06:49.183 09:29:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:06:49.183 09:29:35 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@256 -- # timing_exit lib 00:06:49.183 09:29:35 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:49.183 09:29:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.183 09:29:35 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:06:49.183 09:29:35 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:06:49.183 09:29:35 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.183 09:29:35 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:49.183 09:29:35 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.183 09:29:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.183 ************************************ 00:06:49.183 START TEST nvmf_tcp 00:06:49.183 ************************************ 00:06:49.183 09:29:35 nvmf_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:49.442 * Looking for test storage... 00:06:49.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.442 09:29:35 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.442 --rc genhtml_branch_coverage=1 00:06:49.442 --rc genhtml_function_coverage=1 00:06:49.442 --rc genhtml_legend=1 00:06:49.442 --rc geninfo_all_blocks=1 00:06:49.442 --rc geninfo_unexecuted_blocks=1 00:06:49.442 00:06:49.442 ' 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.442 --rc genhtml_branch_coverage=1 00:06:49.442 --rc genhtml_function_coverage=1 00:06:49.442 --rc genhtml_legend=1 00:06:49.442 --rc geninfo_all_blocks=1 00:06:49.442 --rc geninfo_unexecuted_blocks=1 00:06:49.442 00:06:49.442 ' 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.442 --rc genhtml_branch_coverage=1 00:06:49.442 --rc genhtml_function_coverage=1 00:06:49.442 --rc genhtml_legend=1 00:06:49.442 --rc geninfo_all_blocks=1 00:06:49.442 --rc geninfo_unexecuted_blocks=1 00:06:49.442 00:06:49.442 ' 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.442 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.442 --rc genhtml_branch_coverage=1 00:06:49.442 --rc genhtml_function_coverage=1 00:06:49.442 --rc genhtml_legend=1 00:06:49.442 --rc geninfo_all_blocks=1 00:06:49.442 --rc geninfo_unexecuted_blocks=1 00:06:49.442 00:06:49.442 ' 00:06:49.442 09:29:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:49.442 09:29:35 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:49.442 09:29:35 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.442 09:29:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:49.442 ************************************ 00:06:49.442 START TEST nvmf_target_core 00:06:49.442 ************************************ 00:06:49.442 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:49.442 * Looking for test storage... 00:06:49.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:06:49.442 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.442 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.442 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.702 --rc genhtml_branch_coverage=1 00:06:49.702 --rc genhtml_function_coverage=1 00:06:49.702 --rc genhtml_legend=1 00:06:49.702 --rc geninfo_all_blocks=1 00:06:49.702 --rc geninfo_unexecuted_blocks=1 00:06:49.702 00:06:49.702 ' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.702 --rc genhtml_branch_coverage=1 00:06:49.702 --rc genhtml_function_coverage=1 00:06:49.702 --rc genhtml_legend=1 00:06:49.702 --rc geninfo_all_blocks=1 00:06:49.702 --rc geninfo_unexecuted_blocks=1 00:06:49.702 00:06:49.702 ' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.702 --rc genhtml_branch_coverage=1 00:06:49.702 --rc genhtml_function_coverage=1 00:06:49.702 --rc genhtml_legend=1 00:06:49.702 --rc geninfo_all_blocks=1 00:06:49.702 --rc geninfo_unexecuted_blocks=1 00:06:49.702 00:06:49.702 ' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.702 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.702 --rc genhtml_branch_coverage=1 00:06:49.702 --rc genhtml_function_coverage=1 00:06:49.702 --rc genhtml_legend=1 00:06:49.702 --rc geninfo_all_blocks=1 00:06:49.702 --rc geninfo_unexecuted_blocks=1 00:06:49.702 00:06:49.702 ' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.702 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
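The nvmf/common.sh trace above mints a fresh NVMe host identity for this run. A condensed sketch of the same derivation, assuming nvme-cli is installed; the variable names mirror the trace, and the UUID extraction step is an assumption (only the resulting values appear in the log):

    # nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTNQN=$(nvme gen-hostnqn)
    # The host ID seen in the trace is the UUID suffix of that NQN
    NVME_HOSTID=${NVME_HOSTNQN##*:}
    # Both flags are replayed on every `nvme connect` the tests issue later
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")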
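Looking back at the dd_bs_not_multiple case that opened this excerpt: it stages two malloc bdevs with 512-byte blocks and expects spdk_dd to refuse the copy. A sketch of the failing invocation; --bs is quoted from the error message, while the --ib/--ob flag names, the JSON path, and the odd block size are assumptions for illustration:

    # Any --bs that is not a multiple of the input bdev's 512-byte block trips the check
    ./build/bin/spdk_dd --json /tmp/dd.json --ib=malloc0 --ob=malloc1 --bs=513
    # => *ERROR*: --bs value must be a multiple of input native block size (512)
    # The harness then normalizes the exit status (es=234 -> 106 -> 1) before
    # asserting failure, exactly as the trace shows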
00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.703 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:49.703 ************************************ 00:06:49.703 START TEST nvmf_host_management 00:06:49.703 ************************************ 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:49.703 * Looking for test storage... 
00:06:49.703 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:06:49.703 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.963 --rc genhtml_branch_coverage=1 00:06:49.963 --rc genhtml_function_coverage=1 00:06:49.963 --rc genhtml_legend=1 00:06:49.963 --rc geninfo_all_blocks=1 00:06:49.963 --rc geninfo_unexecuted_blocks=1 00:06:49.963 00:06:49.963 ' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.963 --rc genhtml_branch_coverage=1 00:06:49.963 --rc genhtml_function_coverage=1 00:06:49.963 --rc genhtml_legend=1 00:06:49.963 --rc geninfo_all_blocks=1 00:06:49.963 --rc geninfo_unexecuted_blocks=1 00:06:49.963 00:06:49.963 ' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.963 --rc genhtml_branch_coverage=1 00:06:49.963 --rc genhtml_function_coverage=1 00:06:49.963 --rc genhtml_legend=1 00:06:49.963 --rc geninfo_all_blocks=1 00:06:49.963 --rc geninfo_unexecuted_blocks=1 00:06:49.963 00:06:49.963 ' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:49.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:49.963 --rc genhtml_branch_coverage=1 00:06:49.963 --rc genhtml_function_coverage=1 00:06:49.963 --rc genhtml_legend=1 00:06:49.963 --rc geninfo_all_blocks=1 00:06:49.963 --rc geninfo_unexecuted_blocks=1 00:06:49.963 00:06:49.963 ' 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
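The block above is another pass through scripts/common.sh deciding whether the installed lcov predates 2.0 (`lt 1.15 2`). A condensed sketch of the comparison being traced; simplified, since the real script also sanity-checks each component through its `decimal` helper:

    # Return 0 (true) when dotted version $1 sorts strictly before $2
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # $1 is newer
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # $1 is older
      done
      return 1   # equal is not "less than"
    }
    lt 1.15 2 && echo 'pre-2.0 lcov: add the --rc branch/function coverage options'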
00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:49.963 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:49.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:49.964 09:29:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:49.964 Cannot find device "nvmf_init_br" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:49.964 Cannot find device "nvmf_init_br2" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:49.964 Cannot find device "nvmf_tgt_br" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:49.964 Cannot find device "nvmf_tgt_br2" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:49.964 Cannot find device "nvmf_init_br" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:49.964 Cannot find device "nvmf_init_br2" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:49.964 Cannot find device "nvmf_tgt_br" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:49.964 Cannot find device "nvmf_tgt_br2" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:49.964 Cannot find device "nvmf_br" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:49.964 Cannot find device "nvmf_init_if" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:49.964 Cannot find device "nvmf_init_if2" 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:49.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:06:49.964 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:49.964 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:49.965 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:50.223 09:29:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:50.223 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:50.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:50.224 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.121 ms 00:06:50.224 00:06:50.224 --- 10.0.0.3 ping statistics --- 00:06:50.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.224 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:50.224 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:50.224 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.091 ms 00:06:50.224 00:06:50.224 --- 10.0.0.4 ping statistics --- 00:06:50.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.224 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:50.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:06:50.224 00:06:50.224 --- 10.0.0.1 ping statistics --- 00:06:50.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.224 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:50.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:50.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:06:50.224 00:06:50.224 --- 10.0.0.2 ping statistics --- 00:06:50.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.224 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:50.224 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=62163 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 62163 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62163 ']' 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:50.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:50.483 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.483 [2024-11-05 09:29:36.270586] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:50.483 [2024-11-05 09:29:36.270668] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.483 [2024-11-05 09:29:36.430730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:50.752 [2024-11-05 09:29:36.475054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.752 [2024-11-05 09:29:36.475111] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.752 [2024-11-05 09:29:36.475126] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.752 [2024-11-05 09:29:36.475136] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.752 [2024-11-05 09:29:36.475144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:50.752 [2024-11-05 09:29:36.476074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.752 [2024-11-05 09:29:36.476180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:50.752 [2024-11-05 09:29:36.476307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:50.752 [2024-11-05 09:29:36.476314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.752 [2024-11-05 09:29:36.512027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 [2024-11-05 09:29:36.607372] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
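The nvmf_veth_init sequence traced above built the test network before the target came up. With the pre-cleanup probes and their "Cannot find device" noise stripped, the topology reduces to the following; commands, interface names and addresses are as in the trace, the second initiator/target pair is elided, and root privileges are assumed:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # ties both sides together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3   # initiator -> target, as verified in the trace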
00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:50.752 Malloc0 00:06:50.752 [2024-11-05 09:29:36.672274] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:50.752 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.753 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:50.753 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.753 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=62215 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 62215 /var/tmp/bdevperf.sock 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 62215 ']' 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
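The `cat` into rpc_cmd above replays a freshly rendered rpcs.txt into the target, which is why the Malloc0 and 10.0.0.3:4420 listener notices appear. The file's contents are never echoed in the log, so the following is a hedged reconstruction of the target configuration via scripts/rpc.py: the transport options, Malloc0 geometry (64 MiB, 512-byte blocks) and listener address come straight from the trace, while the exact cnode0/host0 wiring is inferred from the bdevperf JSON and the add/remove-host calls elsewhere in this test:

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512          # 64 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0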
00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:51.023 { 00:06:51.023 "params": { 00:06:51.023 "name": "Nvme$subsystem", 00:06:51.023 "trtype": "$TEST_TRANSPORT", 00:06:51.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:51.023 "adrfam": "ipv4", 00:06:51.023 "trsvcid": "$NVMF_PORT", 00:06:51.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:51.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:51.023 "hdgst": ${hdgst:-false}, 00:06:51.023 "ddgst": ${ddgst:-false} 00:06:51.023 }, 00:06:51.023 "method": "bdev_nvme_attach_controller" 00:06:51.023 } 00:06:51.023 EOF 00:06:51.023 )") 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:51.023 09:29:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:51.023 "params": { 00:06:51.023 "name": "Nvme0", 00:06:51.023 "trtype": "tcp", 00:06:51.023 "traddr": "10.0.0.3", 00:06:51.023 "adrfam": "ipv4", 00:06:51.023 "trsvcid": "4420", 00:06:51.023 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:51.023 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:51.023 "hdgst": false, 00:06:51.023 "ddgst": false 00:06:51.023 }, 00:06:51.023 "method": "bdev_nvme_attach_controller" 00:06:51.023 }' 00:06:51.023 [2024-11-05 09:29:36.790339] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:06:51.023 [2024-11-05 09:29:36.790451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62215 ] 00:06:51.023 [2024-11-05 09:29:36.952624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.282 [2024-11-05 09:29:36.992127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.282 [2024-11-05 09:29:37.034261] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.282 Running I/O for 10 seconds... 
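gen_nvmf_target_json above rendered the attach-controller config that bdevperf consumed through the /dev/fd/63 process substitution. A sketch of reproducing the run from a regular file: the params block is copied from the printf output in the trace, but the outer subsystems/bdev wrapper is an assumption modeled on the spdk_dd config earlier in this log:

    cat > /tmp/bdevperf.json <<'EOF'
    { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false },
      "method": "bdev_nvme_attach_controller" } ] } ] }
    EOF
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/bdevperf.json \
        -q 64 -o 65536 -w verify -t 10   # 64-deep queue, 64 KiB I/Os, 10 s verify run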
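Immediately below, the waitforio helper polls bdevperf until at least 100 reads have completed before the host-removal step fires. A condensed sketch of that loop; rpc_cmd is swapped for scripts/rpc.py, everything else follows the trace:

    # Up to 10 polls of Nvme0n1's read counter over the bdevperf RPC socket
    for (( i = 10; i != 0; i-- )); do
      read_io_count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                      jq -r '.bdevs[0].num_read_ops')
      [ "$read_io_count" -ge 100 ] && break   # trace: 67 on the first pass, 579 on the next
      sleep 0.25
    done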
00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.282 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.541 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.541 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:51.541 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:51.541 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.802 09:29:37 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.802 09:29:37 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:51.802 [2024-11-05 09:29:37.596569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.802 [2024-11-05 09:29:37.596631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.802 [2024-11-05 09:29:37.596654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.802 [2024-11-05 09:29:37.596664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.802 [2024-11-05 09:29:37.596676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.802 [2024-11-05 09:29:37.596685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.802 [2024-11-05 09:29:37.596697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.802 [2024-11-05 09:29:37.596706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.802 [2024-11-05 09:29:37.596717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:51.802 [2024-11-05 09:29:37.596728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.802 [... the same WRITE command / ABORTED - SQ DELETION (00/08) notice pair repeats for cid 5 through cid 63, lba 82560 through 89984 in 128-block steps, covering all 64 queued I/Os on qid:1 ...] 00:06:51.804 [2024-11-05 09:29:37.598041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23a52d0 is same with the state(6) to be set 00:06:51.804
[2024-11-05 09:29:37.598170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.804 [2024-11-05 09:29:37.598188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.804 [2024-11-05 09:29:37.598200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.804 [2024-11-05 09:29:37.598209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.804 [2024-11-05 09:29:37.598218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.804 [2024-11-05 09:29:37.598227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.804 [2024-11-05 09:29:37.598237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:51.804 [2024-11-05 09:29:37.598246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:51.804 [2024-11-05 09:29:37.598255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23aace0 is same with the state(6) to be set 00:06:51.804 [2024-11-05 09:29:37.599556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:06:51.804 task offset: 81920 on job bdev=Nvme0n1 fails 00:06:51.804 00:06:51.804 Latency(us) 00:06:51.804 [2024-11-05T09:29:37.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:51.804 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:51.804 Job: Nvme0n1 ended in about 0.46 seconds with error 00:06:51.804 Verification LBA range: start 0x0 length 0x400 00:06:51.804 Nvme0n1 : 0.46 1390.87 86.93 139.09 0.00 40220.45 2159.71 45041.11 00:06:51.804 [2024-11-05T09:29:37.762Z] =================================================================================================================== 00:06:51.804 [2024-11-05T09:29:37.762Z] Total : 1390.87 86.93 139.09 0.00 40220.45 2159.71 45041.11 00:06:51.804 [2024-11-05 09:29:37.601703] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.804 [2024-11-05 09:29:37.601732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23aace0 (9): Bad file descriptor 00:06:51.804 [2024-11-05 09:29:37.604635] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
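For context on the abort storm just logged: host_management.sh first waits until bdevperf is demonstrably moving data, then removes host0 from cnode0 and re-adds it, so the target drops the connection, the submission queue is deleted under the initiator, and every queued WRITE completes as ABORTED - SQ DELETION before the controller reset recovers. The polling helper can be read straight off the xtrace; a sketch follows (rpc_cmd is the harness wrapper around scripts/rpc.py, and the 100-op threshold plus the 67 -> 579 read counts are from this run, not invariants):

waitforio_sketch() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for (( i = 10; i != 0; i-- )); do      # up to ten polls, 0.25 s apart
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                          # enough reads observed; stop polling
            break
        fi
        sleep 0.25
    done
    return $ret
}

# waitforio_sketch /var/tmp/bdevperf.sock Nvme0n1, then the trace issues:
# rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
# rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0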
00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 62215 00:06:52.741 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (62215) - No such process 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:06:52.741 { 00:06:52.741 "params": { 00:06:52.741 "name": "Nvme$subsystem", 00:06:52.741 "trtype": "$TEST_TRANSPORT", 00:06:52.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:52.741 "adrfam": "ipv4", 00:06:52.741 "trsvcid": "$NVMF_PORT", 00:06:52.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:52.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:52.741 "hdgst": ${hdgst:-false}, 00:06:52.741 "ddgst": ${ddgst:-false} 00:06:52.741 }, 00:06:52.741 "method": "bdev_nvme_attach_controller" 00:06:52.741 } 00:06:52.741 EOF 00:06:52.741 )") 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:06:52.741 09:29:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:06:52.741 "params": { 00:06:52.741 "name": "Nvme0", 00:06:52.741 "trtype": "tcp", 00:06:52.741 "traddr": "10.0.0.3", 00:06:52.741 "adrfam": "ipv4", 00:06:52.741 "trsvcid": "4420", 00:06:52.741 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:52.741 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:52.741 "hdgst": false, 00:06:52.741 "ddgst": false 00:06:52.741 }, 00:06:52.741 "method": "bdev_nvme_attach_controller" 00:06:52.741 }' 00:06:52.741 [2024-11-05 09:29:38.655158] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
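Two cleanup idioms sit side by side here. The trap installed before the first run does kill -9 "$perfpid" || true, which is why the 'No such process' for pid 62215 above is harmless noise; the teardown further below instead goes through a killprocess helper that sanity-checks the victim first (pid 62163, whose comm turns out to be reactor_1). A sketch of that helper as it can be read off the xtrace that follows; treat it as an illustration of the traced steps, not the literal autotest_common.sh source:

killprocess_sketch() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1              # no pid captured, nothing to kill
    kill -0 "$pid" 2>/dev/null || return 0 # already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # Linux path in the trace
    [ "$process_name" = sudo ] && return 1 # refuse to kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                            # reap it so sockets and locks free up
}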
00:06:52.741 [2024-11-05 09:29:38.655799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62255 ] 00:06:53.000 [2024-11-05 09:29:38.805692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.000 [2024-11-05 09:29:38.839060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.000 [2024-11-05 09:29:38.877924] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.258 Running I/O for 1 seconds... 00:06:54.191 1472.00 IOPS, 92.00 MiB/s 00:06:54.191 Latency(us) 00:06:54.191 [2024-11-05T09:29:40.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:54.191 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:54.191 Verification LBA range: start 0x0 length 0x400 00:06:54.191 Nvme0n1 : 1.01 1521.97 95.12 0.00 0.00 41207.64 3961.95 38606.66 00:06:54.191 [2024-11-05T09:29:40.149Z] =================================================================================================================== 00:06:54.191 [2024-11-05T09:29:40.149Z] Total : 1521.97 95.12 0.00 0.00 41207.64 3961.95 38606.66 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:54.191 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:54.451 rmmod nvme_tcp 00:06:54.451 rmmod nvme_fabrics 00:06:54.451 rmmod nvme_keyring 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 62163 ']' 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 62163 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 62163 ']' 00:06:54.451 09:29:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 62163 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62163 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:06:54.451 killing process with pid 62163 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62163' 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 62163 00:06:54.451 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 62163 00:06:54.451 [2024-11-05 09:29:40.392518] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:06:54.710 09:29:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:54.710 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:54.969 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:54.970 00:06:54.970 real 0m5.186s 00:06:54.970 user 0m17.995s 00:06:54.970 sys 0m1.383s 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:54.970 ************************************ 00:06:54.970 END TEST nvmf_host_management 00:06:54.970 ************************************ 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:54.970 ************************************ 00:06:54.970 START TEST nvmf_lvol 00:06:54.970 ************************************ 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:54.970 * Looking for test storage... 
00:06:54.970 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.970 --rc genhtml_branch_coverage=1 00:06:54.970 --rc genhtml_function_coverage=1 00:06:54.970 --rc genhtml_legend=1 00:06:54.970 --rc geninfo_all_blocks=1 00:06:54.970 --rc geninfo_unexecuted_blocks=1 00:06:54.970 00:06:54.970 ' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.970 --rc genhtml_branch_coverage=1 00:06:54.970 --rc genhtml_function_coverage=1 00:06:54.970 --rc genhtml_legend=1 00:06:54.970 --rc geninfo_all_blocks=1 00:06:54.970 --rc geninfo_unexecuted_blocks=1 00:06:54.970 00:06:54.970 ' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.970 --rc genhtml_branch_coverage=1 00:06:54.970 --rc genhtml_function_coverage=1 00:06:54.970 --rc genhtml_legend=1 00:06:54.970 --rc geninfo_all_blocks=1 00:06:54.970 --rc geninfo_unexecuted_blocks=1 00:06:54.970 00:06:54.970 ' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:54.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.970 --rc genhtml_branch_coverage=1 00:06:54.970 --rc genhtml_function_coverage=1 00:06:54.970 --rc genhtml_legend=1 00:06:54.970 --rc geninfo_all_blocks=1 00:06:54.970 --rc geninfo_unexecuted_blocks=1 00:06:54.970 00:06:54.970 ' 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.970 09:29:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.970 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:55.230 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.231 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:55.231 
09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:06:55.231 Cannot find device "nvmf_init_br" 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:06:55.231 Cannot find device "nvmf_init_br2" 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:06:55.231 Cannot find device "nvmf_tgt_br" 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:06:55.231 09:29:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:06:55.231 Cannot find device "nvmf_tgt_br2" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:06:55.231 Cannot find device "nvmf_init_br" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:06:55.231 Cannot find device "nvmf_init_br2" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:06:55.231 Cannot find device "nvmf_tgt_br" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:06:55.231 Cannot find device "nvmf_tgt_br2" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:06:55.231 Cannot find device "nvmf_br" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:06:55.231 Cannot find device "nvmf_init_if" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:06:55.231 Cannot find device "nvmf_init_if2" 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:06:55.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:06:55.231 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:06:55.231 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:06:55.232 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:06:55.232 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:06:55.490 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:06:55.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:06:55.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:06:55.491 00:06:55.491 --- 10.0.0.3 ping statistics --- 00:06:55.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.491 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:06:55.491 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:06:55.491 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:06:55.491 00:06:55.491 --- 10.0.0.4 ping statistics --- 00:06:55.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.491 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:06:55.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:55.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:06:55.491 00:06:55.491 --- 10.0.0.1 ping statistics --- 00:06:55.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.491 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:06:55.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:55.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:06:55.491 00:06:55.491 --- 10.0.0.2 ping statistics --- 00:06:55.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:55.491 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=62514 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 62514 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 62514 ']' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.491 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:55.491 [2024-11-05 09:29:41.424335] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:06:55.491 [2024-11-05 09:29:41.424426] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:55.750 [2024-11-05 09:29:41.577406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.750 [2024-11-05 09:29:41.616173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:55.750 [2024-11-05 09:29:41.616420] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:55.750 [2024-11-05 09:29:41.616628] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:55.750 [2024-11-05 09:29:41.616786] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:55.750 [2024-11-05 09:29:41.616855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:55.750 [2024-11-05 09:29:41.617907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.750 [2024-11-05 09:29:41.617969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.750 [2024-11-05 09:29:41.617968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:55.750 [2024-11-05 09:29:41.652235] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:56.008 09:29:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:56.266 [2024-11-05 09:29:42.052032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:56.266 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:56.525 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:56.525 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:56.783 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:56.784 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:57.042 09:29:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:57.612 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=da9be796-2ec4-49cc-bf6f-82a4b56d987b 00:06:57.612 09:29:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u da9be796-2ec4-49cc-bf6f-82a4b56d987b lvol 20 00:06:57.612 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=145211b6-7ee0-4454-8428-acddca98b4ad 00:06:57.612 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:58.179 09:29:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 145211b6-7ee0-4454-8428-acddca98b4ad 00:06:58.179 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:06:58.437 [2024-11-05 09:29:44.329064] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:06:58.437 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:06:58.695 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=62588 00:06:58.695 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:58.696 09:29:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:00.069 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 145211b6-7ee0-4454-8428-acddca98b4ad MY_SNAPSHOT 00:07:00.069 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f51d5815-0aa2-4b29-9501-09af30718598 00:07:00.070 09:29:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 145211b6-7ee0-4454-8428-acddca98b4ad 30 00:07:00.328 09:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f51d5815-0aa2-4b29-9501-09af30718598 MY_CLONE 00:07:00.895 09:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=9aa273bc-c535-4ec5-9504-b10ad4a1bce5 00:07:00.895 09:29:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 9aa273bc-c535-4ec5-9504-b10ad4a1bce5 00:07:01.153 09:29:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 62588 00:07:09.267 Initializing NVMe Controllers 00:07:09.267 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:07:09.267 Controller IO queue size 128, less than required. 00:07:09.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:09.267 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:09.267 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:09.267 Initialization complete. Launching workers. 
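The statistics that follow come from the ten-second spdk_nvme_perf run launched above, which kept 128 queued 4 KiB random writes in flight while the lvol underneath the subsystem was snapshotted, resized, cloned, and inflated. Stripped of the xtrace prefixes, the whole fixture reduces to this RPC sequence (commands and arguments as logged; the shell variables are stand-ins for the UUIDs the log captures):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                    # Malloc0: 64 MiB, 512 B blocks
    $rpc bdev_malloc_create 64 512                    # Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
    sleep 1
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                  # grow the live lvol from 20 to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                   # decouple the clone from its snapshot
    wait                                              # perf finishes its 10 s window

The point of the test is that none of that lvol surgery disturbs the in-flight workload; the table below shows both perf cores completing I/O across the full run.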
00:07:09.267 ======================================================== 00:07:09.267 Latency(us) 00:07:09.267 Device Information : IOPS MiB/s Average min max 00:07:09.267 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10537.49 41.16 12157.77 3516.35 84807.56 00:07:09.267 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10512.29 41.06 12185.55 3285.32 46085.48 00:07:09.267 ======================================================== 00:07:09.267 Total : 21049.77 82.23 12171.64 3285.32 84807.56 00:07:09.267 00:07:09.267 09:29:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:09.584 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 145211b6-7ee0-4454-8428-acddca98b4ad 00:07:09.584 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u da9be796-2ec4-49cc-bf6f-82a4b56d987b 00:07:09.842 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:09.843 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:09.843 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:09.843 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:09.843 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:10.101 rmmod nvme_tcp 00:07:10.101 rmmod nvme_fabrics 00:07:10.101 rmmod nvme_keyring 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 62514 ']' 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 62514 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 62514 ']' 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 62514 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62514 00:07:10.101 killing process with pid 62514 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 62514' 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 62514 00:07:10.101 09:29:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 62514 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.361 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:07:10.620 00:07:10.620 real 0m15.594s 00:07:10.620 user 1m4.504s 00:07:10.620 sys 0m4.254s 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:10.620 ************************************ 00:07:10.620 END TEST nvmf_lvol 00:07:10.620 ************************************ 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:10.620 ************************************ 00:07:10.620 START TEST nvmf_lvs_grow 00:07:10.620 ************************************ 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:10.620 * Looking for test storage... 00:07:10.620 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.620 --rc genhtml_branch_coverage=1 00:07:10.620 --rc genhtml_function_coverage=1 00:07:10.620 --rc genhtml_legend=1 00:07:10.620 --rc geninfo_all_blocks=1 00:07:10.620 --rc geninfo_unexecuted_blocks=1 00:07:10.620 00:07:10.620 ' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.620 --rc genhtml_branch_coverage=1 00:07:10.620 --rc genhtml_function_coverage=1 00:07:10.620 --rc genhtml_legend=1 00:07:10.620 --rc geninfo_all_blocks=1 00:07:10.620 --rc geninfo_unexecuted_blocks=1 00:07:10.620 00:07:10.620 ' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.620 --rc genhtml_branch_coverage=1 00:07:10.620 --rc genhtml_function_coverage=1 00:07:10.620 --rc genhtml_legend=1 00:07:10.620 --rc geninfo_all_blocks=1 00:07:10.620 --rc geninfo_unexecuted_blocks=1 00:07:10.620 00:07:10.620 ' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:10.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:10.620 --rc genhtml_branch_coverage=1 00:07:10.620 --rc genhtml_function_coverage=1 00:07:10.620 --rc genhtml_legend=1 00:07:10.620 --rc geninfo_all_blocks=1 00:07:10.620 --rc geninfo_unexecuted_blocks=1 00:07:10.620 00:07:10.620 ' 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:10.620 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:10.880 09:29:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:10.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
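One line in the sourcing above deserves a note: the "integer expression expected" message is bash itself objecting, not a test failure. The xtrace shows '[' '' -eq 1 ']' reaching common.sh line 33, an empty string being compared numerically, and test's -eq requires an integer on both sides. A minimal reproduction, with a stand-in variable name since the log does not show which variable was empty:

    $ flag=""
    $ [ "$flag" -eq 1 ] && echo enabled
    bash: [: : integer expression expected     # same complaint as common.sh line 33
    $ [ "${flag:-0}" -eq 1 ] && echo enabled   # defaulting the empty operand avoids it

The run continues regardless, because a test that errors out simply evaluates as false; the harness takes the non-matching branch and moves on.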
00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:10.880 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:10.881 Cannot find device "nvmf_init_br" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:10.881 Cannot find device "nvmf_init_br2" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:10.881 Cannot find device "nvmf_tgt_br" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:10.881 Cannot find device "nvmf_tgt_br2" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:10.881 Cannot find device "nvmf_init_br" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:10.881 Cannot find device "nvmf_init_br2" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:10.881 Cannot find device "nvmf_tgt_br" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:10.881 Cannot find device "nvmf_tgt_br2" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:10.881 Cannot find device "nvmf_br" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:10.881 Cannot find device "nvmf_init_if" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:10.881 Cannot find device "nvmf_init_if2" 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:10.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:10.881 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:10.881 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
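The firewall rules added next go through SPDK's ipts wrapper rather than bare iptables: as the @790 expansions below show, every rule is tagged with an "-m comment --comment 'SPDK_NVMF:...'" copy of its own arguments, and the iptr helper used during teardown (seen earlier as iptables-save | grep -v SPDK_NVMF | iptables-restore) strips exactly the tagged rules while leaving the host's own ruleset alone. Reconstructed from those expansions, the pattern is roughly:

    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }     # tag every rule we insert
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }  # sweep only tagged rules

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT     # NVMe/TCP to each initiator if
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                   # let the bridge forward
    # ... test runs ...
    iptr                                                              # teardown restores the pre-test ruleset

Tagging the state you create and sweeping by tag is what lets repeated test runs clean up even after a crashed run, without tracking individual rule positions.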
00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:11.153 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:11.153 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:07:11.153 00:07:11.153 --- 10.0.0.3 ping statistics --- 00:07:11.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.153 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:11.153 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:11.153 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.031 ms 00:07:11.153 00:07:11.153 --- 10.0.0.4 ping statistics --- 00:07:11.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.153 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:11.153 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:11.153 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.046 ms 00:07:11.153 00:07:11.153 --- 10.0.0.1 ping statistics --- 00:07:11.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.153 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:07:11.153 09:29:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:11.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:11.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:07:11.153 00:07:11.153 --- 10.0.0.2 ping statistics --- 00:07:11.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.153 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=62964 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 62964 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 62964 ']' 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.153 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.153 [2024-11-05 09:29:57.091862] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:07:11.153 [2024-11-05 09:29:57.091956] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.446 [2024-11-05 09:29:57.237970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.446 [2024-11-05 09:29:57.267615] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.446 [2024-11-05 09:29:57.267700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.446 [2024-11-05 09:29:57.267717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:11.446 [2024-11-05 09:29:57.267725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:11.446 [2024-11-05 09:29:57.267732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:11.446 [2024-11-05 09:29:57.268090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.446 [2024-11-05 09:29:57.297692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:11.446 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:12.020 [2024-11-05 09:29:57.682741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:12.020 ************************************ 00:07:12.020 START TEST lvs_grow_clean 00:07:12.020 ************************************ 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:12.020 09:29:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:12.020 09:29:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:12.279 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:12.279 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:12.538 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:12.538 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:12.538 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:12.797 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:12.797 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:12.797 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa lvol 150 00:07:13.056 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3349f6ba-09d6-4fa3-b4e9-e3c361c67df6 00:07:13.056 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:13.056 09:29:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:13.315 [2024-11-05 09:29:59.103695] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:13.315 [2024-11-05 09:29:59.103786] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:13.315 true 00:07:13.315 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:13.315 09:29:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:13.574 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:13.574 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:13.834 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3349f6ba-09d6-4fa3-b4e9-e3c361c67df6 00:07:14.093 09:29:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:14.352 [2024-11-05 09:30:00.200335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:14.352 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63039 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63039 /var/tmp/bdevperf.sock 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 63039 ']' 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:14.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:14.611 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:14.611 [2024-11-05 09:30:00.539036] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
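Up to this point the clean pass has built the whole stack the test grows: a 200M file-backed AIO bdev, a blobstore-based lvstore on it with 4 MiB clusters (metadata overhead leaves 49 of the 50 clusters usable, which is what the data_clusters == 49 check above asserts), and a 150M lvol that rounds up to 38 clusters / 38912 4K blocks, exported over NVMe/TCP as nqn.2016-06.io.spdk:cnode0. Condensed into the underlying RPC sequence (paths shortened; <lvs-uuid> is a placeholder for the UUID returned by the lvstore create, not a value from this log), the grow flow driven above and in the next few entries is roughly:

  # create the 200M backing file and register it as a 4096-byte-block AIO bdev
  truncate -s 200M aio_bdev_file
  scripts/rpc.py bdev_aio_create aio_bdev_file aio_bdev 4096
  # lvstore with 4 MiB clusters: 50 clusters total, 49 left for data
  scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
      --md-pages-per-cluster-ratio 300 aio_bdev lvs
  scripts/rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150   # rounds up to 38 clusters
  # grow: enlarge the file, let the AIO bdev notice, then grow the lvstore
  truncate -s 400M aio_bdev_file
  scripts/rpc.py bdev_aio_rescan aio_bdev                  # 51200 -> 102400 blocks
  scripts/rpc.py bdev_lvol_grow_lvstore -u <lvs-uuid>      # 49 -> 99 data clusters

The only step that resizes anything in place is bdev_lvol_grow_lvstore; the truncate/rescan pair just makes the extra capacity visible to the bdev layer first.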
00:07:14.612 [2024-11-05 09:30:00.539125] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63039 ] 00:07:14.870 [2024-11-05 09:30:00.681579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.870 [2024-11-05 09:30:00.713307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.870 [2024-11-05 09:30:00.743398] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.129 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:15.129 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:07:15.129 09:30:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:15.388 Nvme0n1 00:07:15.388 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:15.647 [ 00:07:15.647 { 00:07:15.647 "name": "Nvme0n1", 00:07:15.647 "aliases": [ 00:07:15.647 "3349f6ba-09d6-4fa3-b4e9-e3c361c67df6" 00:07:15.647 ], 00:07:15.647 "product_name": "NVMe disk", 00:07:15.647 "block_size": 4096, 00:07:15.647 "num_blocks": 38912, 00:07:15.647 "uuid": "3349f6ba-09d6-4fa3-b4e9-e3c361c67df6", 00:07:15.647 "numa_id": -1, 00:07:15.647 "assigned_rate_limits": { 00:07:15.647 "rw_ios_per_sec": 0, 00:07:15.647 "rw_mbytes_per_sec": 0, 00:07:15.647 "r_mbytes_per_sec": 0, 00:07:15.647 "w_mbytes_per_sec": 0 00:07:15.647 }, 00:07:15.647 "claimed": false, 00:07:15.647 "zoned": false, 00:07:15.647 "supported_io_types": { 00:07:15.647 "read": true, 00:07:15.647 "write": true, 00:07:15.647 "unmap": true, 00:07:15.647 "flush": true, 00:07:15.647 "reset": true, 00:07:15.647 "nvme_admin": true, 00:07:15.647 "nvme_io": true, 00:07:15.647 "nvme_io_md": false, 00:07:15.647 "write_zeroes": true, 00:07:15.647 "zcopy": false, 00:07:15.647 "get_zone_info": false, 00:07:15.647 "zone_management": false, 00:07:15.647 "zone_append": false, 00:07:15.647 "compare": true, 00:07:15.647 "compare_and_write": true, 00:07:15.647 "abort": true, 00:07:15.647 "seek_hole": false, 00:07:15.647 "seek_data": false, 00:07:15.647 "copy": true, 00:07:15.647 "nvme_iov_md": false 00:07:15.647 }, 00:07:15.647 "memory_domains": [ 00:07:15.647 { 00:07:15.647 "dma_device_id": "system", 00:07:15.647 "dma_device_type": 1 00:07:15.647 } 00:07:15.647 ], 00:07:15.647 "driver_specific": { 00:07:15.647 "nvme": [ 00:07:15.647 { 00:07:15.647 "trid": { 00:07:15.647 "trtype": "TCP", 00:07:15.647 "adrfam": "IPv4", 00:07:15.647 "traddr": "10.0.0.3", 00:07:15.647 "trsvcid": "4420", 00:07:15.647 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:15.647 }, 00:07:15.647 "ctrlr_data": { 00:07:15.647 "cntlid": 1, 00:07:15.647 "vendor_id": "0x8086", 00:07:15.647 "model_number": "SPDK bdev Controller", 00:07:15.647 "serial_number": "SPDK0", 00:07:15.647 "firmware_revision": "25.01", 00:07:15.647 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:15.647 "oacs": { 00:07:15.647 "security": 0, 00:07:15.647 "format": 0, 00:07:15.647 "firmware": 0, 
00:07:15.647 "ns_manage": 0 00:07:15.647 }, 00:07:15.647 "multi_ctrlr": true, 00:07:15.647 "ana_reporting": false 00:07:15.647 }, 00:07:15.647 "vs": { 00:07:15.647 "nvme_version": "1.3" 00:07:15.647 }, 00:07:15.647 "ns_data": { 00:07:15.647 "id": 1, 00:07:15.647 "can_share": true 00:07:15.647 } 00:07:15.647 } 00:07:15.647 ], 00:07:15.647 "mp_policy": "active_passive" 00:07:15.647 } 00:07:15.647 } 00:07:15.647 ] 00:07:15.647 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:15.647 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63055 00:07:15.647 09:30:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:15.647 Running I/O for 10 seconds... 00:07:16.585 Latency(us) 00:07:16.585 [2024-11-05T09:30:02.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:16.585 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.585 Nvme0n1 : 1.00 6691.00 26.14 0.00 0.00 0.00 0.00 0.00 00:07:16.585 [2024-11-05T09:30:02.543Z] =================================================================================================================== 00:07:16.585 [2024-11-05T09:30:02.543Z] Total : 6691.00 26.14 0.00 0.00 0.00 0.00 0.00 00:07:16.585 00:07:17.522 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:17.781 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.781 Nvme0n1 : 2.00 6584.00 25.72 0.00 0.00 0.00 0.00 0.00 00:07:17.781 [2024-11-05T09:30:03.739Z] =================================================================================================================== 00:07:17.781 [2024-11-05T09:30:03.739Z] Total : 6584.00 25.72 0.00 0.00 0.00 0.00 0.00 00:07:17.781 00:07:18.039 true 00:07:18.039 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:18.039 09:30:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:18.297 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:18.297 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:18.297 09:30:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 63055 00:07:18.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.864 Nvme0n1 : 3.00 6590.67 25.74 0.00 0.00 0.00 0.00 0.00 00:07:18.864 [2024-11-05T09:30:04.823Z] =================================================================================================================== 00:07:18.865 [2024-11-05T09:30:04.823Z] Total : 6590.67 25.74 0.00 0.00 0.00 0.00 0.00 00:07:18.865 00:07:19.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.800 Nvme0n1 : 4.00 6456.00 25.22 0.00 0.00 0.00 0.00 0.00 00:07:19.800 [2024-11-05T09:30:05.758Z] 
=================================================================================================================== 00:07:19.800 [2024-11-05T09:30:05.758Z] Total : 6456.00 25.22 0.00 0.00 0.00 0.00 0.00 00:07:19.800 00:07:20.736 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.736 Nvme0n1 : 5.00 6434.80 25.14 0.00 0.00 0.00 0.00 0.00 00:07:20.736 [2024-11-05T09:30:06.694Z] =================================================================================================================== 00:07:20.736 [2024-11-05T09:30:06.694Z] Total : 6434.80 25.14 0.00 0.00 0.00 0.00 0.00 00:07:20.736 00:07:21.672 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.672 Nvme0n1 : 6.00 6420.67 25.08 0.00 0.00 0.00 0.00 0.00 00:07:21.672 [2024-11-05T09:30:07.630Z] =================================================================================================================== 00:07:21.672 [2024-11-05T09:30:07.630Z] Total : 6420.67 25.08 0.00 0.00 0.00 0.00 0.00 00:07:21.672 00:07:22.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.609 Nvme0n1 : 7.00 6410.57 25.04 0.00 0.00 0.00 0.00 0.00 00:07:22.609 [2024-11-05T09:30:08.567Z] =================================================================================================================== 00:07:22.609 [2024-11-05T09:30:08.567Z] Total : 6410.57 25.04 0.00 0.00 0.00 0.00 0.00 00:07:22.609 00:07:23.985 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.985 Nvme0n1 : 8.00 6371.25 24.89 0.00 0.00 0.00 0.00 0.00 00:07:23.985 [2024-11-05T09:30:09.943Z] =================================================================================================================== 00:07:23.985 [2024-11-05T09:30:09.943Z] Total : 6371.25 24.89 0.00 0.00 0.00 0.00 0.00 00:07:23.985 00:07:24.921 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.921 Nvme0n1 : 9.00 6340.67 24.77 0.00 0.00 0.00 0.00 0.00 00:07:24.921 [2024-11-05T09:30:10.879Z] =================================================================================================================== 00:07:24.921 [2024-11-05T09:30:10.879Z] Total : 6340.67 24.77 0.00 0.00 0.00 0.00 0.00 00:07:24.921 00:07:25.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.859 Nvme0n1 : 10.00 6316.20 24.67 0.00 0.00 0.00 0.00 0.00 00:07:25.859 [2024-11-05T09:30:11.817Z] =================================================================================================================== 00:07:25.859 [2024-11-05T09:30:11.817Z] Total : 6316.20 24.67 0.00 0.00 0.00 0.00 0.00 00:07:25.859 00:07:25.859 00:07:25.859 Latency(us) 00:07:25.859 [2024-11-05T09:30:11.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:25.859 Nvme0n1 : 10.01 6322.07 24.70 0.00 0.00 20240.72 11796.48 99138.09 00:07:25.859 [2024-11-05T09:30:11.817Z] =================================================================================================================== 00:07:25.859 [2024-11-05T09:30:11.817Z] Total : 6322.07 24.70 0.00 0.00 20240.72 11796.48 99138.09 00:07:25.859 { 00:07:25.859 "results": [ 00:07:25.859 { 00:07:25.859 "job": "Nvme0n1", 00:07:25.859 "core_mask": "0x2", 00:07:25.859 "workload": "randwrite", 00:07:25.859 "status": "finished", 00:07:25.859 "queue_depth": 128, 00:07:25.859 "io_size": 4096, 00:07:25.859 "runtime": 
10.010969, 00:07:25.859 "iops": 6322.065326543315, 00:07:25.859 "mibps": 24.695567681809823, 00:07:25.859 "io_failed": 0, 00:07:25.859 "io_timeout": 0, 00:07:25.859 "avg_latency_us": 20240.72474008532, 00:07:25.859 "min_latency_us": 11796.48, 00:07:25.859 "max_latency_us": 99138.09454545454 00:07:25.859 } 00:07:25.859 ], 00:07:25.859 "core_count": 1 00:07:25.859 } 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63039 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 63039 ']' 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 63039 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63039 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:25.859 killing process with pid 63039 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63039' 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 63039 00:07:25.859 Received shutdown signal, test time was about 10.000000 seconds 00:07:25.859 00:07:25.859 Latency(us) 00:07:25.859 [2024-11-05T09:30:11.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:25.859 [2024-11-05T09:30:11.817Z] =================================================================================================================== 00:07:25.859 [2024-11-05T09:30:11.817Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 63039 00:07:25.859 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:26.118 09:30:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:26.377 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:26.377 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:26.636 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:26.636 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:26.636 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:26.895 [2024-11-05 09:30:12.762480] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:26.895 09:30:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:27.156 request: 00:07:27.156 { 00:07:27.156 "uuid": "2918a0e3-c40a-4387-a492-4e3d0e1fc1aa", 00:07:27.156 "method": "bdev_lvol_get_lvstores", 00:07:27.156 "req_id": 1 00:07:27.156 } 00:07:27.156 Got JSON-RPC error response 00:07:27.156 response: 00:07:27.156 { 00:07:27.156 "code": -19, 00:07:27.156 "message": "No such device" 00:07:27.156 } 00:07:27.156 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:27.156 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:27.156 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:27.156 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:27.156 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:27.478 aio_bdev 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
3349f6ba-09d6-4fa3-b4e9-e3c361c67df6 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=3349f6ba-09d6-4fa3-b4e9-e3c361c67df6 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:27.478 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:27.746 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3349f6ba-09d6-4fa3-b4e9-e3c361c67df6 -t 2000 00:07:28.006 [ 00:07:28.006 { 00:07:28.006 "name": "3349f6ba-09d6-4fa3-b4e9-e3c361c67df6", 00:07:28.006 "aliases": [ 00:07:28.006 "lvs/lvol" 00:07:28.006 ], 00:07:28.006 "product_name": "Logical Volume", 00:07:28.006 "block_size": 4096, 00:07:28.006 "num_blocks": 38912, 00:07:28.006 "uuid": "3349f6ba-09d6-4fa3-b4e9-e3c361c67df6", 00:07:28.006 "assigned_rate_limits": { 00:07:28.006 "rw_ios_per_sec": 0, 00:07:28.006 "rw_mbytes_per_sec": 0, 00:07:28.006 "r_mbytes_per_sec": 0, 00:07:28.006 "w_mbytes_per_sec": 0 00:07:28.006 }, 00:07:28.006 "claimed": false, 00:07:28.006 "zoned": false, 00:07:28.006 "supported_io_types": { 00:07:28.006 "read": true, 00:07:28.006 "write": true, 00:07:28.006 "unmap": true, 00:07:28.006 "flush": false, 00:07:28.006 "reset": true, 00:07:28.006 "nvme_admin": false, 00:07:28.006 "nvme_io": false, 00:07:28.006 "nvme_io_md": false, 00:07:28.006 "write_zeroes": true, 00:07:28.006 "zcopy": false, 00:07:28.006 "get_zone_info": false, 00:07:28.006 "zone_management": false, 00:07:28.006 "zone_append": false, 00:07:28.006 "compare": false, 00:07:28.006 "compare_and_write": false, 00:07:28.006 "abort": false, 00:07:28.006 "seek_hole": true, 00:07:28.006 "seek_data": true, 00:07:28.006 "copy": false, 00:07:28.006 "nvme_iov_md": false 00:07:28.006 }, 00:07:28.006 "driver_specific": { 00:07:28.006 "lvol": { 00:07:28.006 "lvol_store_uuid": "2918a0e3-c40a-4387-a492-4e3d0e1fc1aa", 00:07:28.006 "base_bdev": "aio_bdev", 00:07:28.006 "thin_provision": false, 00:07:28.006 "num_allocated_clusters": 38, 00:07:28.006 "snapshot": false, 00:07:28.006 "clone": false, 00:07:28.006 "esnap_clone": false 00:07:28.006 } 00:07:28.006 } 00:07:28.006 } 00:07:28.006 ] 00:07:28.006 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:07:28.006 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:28.006 09:30:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:28.265 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:28.265 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:28.265 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:28.524 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:28.524 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 3349f6ba-09d6-4fa3-b4e9-e3c361c67df6 00:07:28.784 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2918a0e3-c40a-4387-a492-4e3d0e1fc1aa 00:07:29.045 09:30:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:29.304 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:29.563 ************************************ 00:07:29.563 END TEST lvs_grow_clean 00:07:29.563 ************************************ 00:07:29.563 00:07:29.563 real 0m17.666s 00:07:29.563 user 0m16.677s 00:07:29.563 sys 0m2.352s 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:29.563 ************************************ 00:07:29.563 START TEST lvs_grow_dirty 00:07:29.563 ************************************ 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:29.563 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:29.564 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:29.823 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:29.823 09:30:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:30.391 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:30.391 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:30.391 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:30.391 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:30.391 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:30.391 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 lvol 150 00:07:30.649 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:30.649 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:30.649 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:30.909 [2024-11-05 09:30:16.784712] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:30.909 [2024-11-05 09:30:16.784794] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:30.909 true 00:07:30.909 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:30.909 09:30:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:31.168 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:31.168 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.426 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:31.685 09:30:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:07:31.943 [2024-11-05 09:30:17.825384] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:31.943 09:30:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=63304 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 63304 /var/tmp/bdevperf.sock 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63304 ']' 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:32.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:32.201 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:32.201 [2024-11-05 09:30:18.119714] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
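As in the clean pass, the I/O load never touches the lvol locally: a separate bdevperf process connects to the exported subsystem over NVMe/TCP and drives 4K random writes while the grow happens underneath. The three commands involved, condensed from the trace (socket path, address, and NQN exactly as logged; binary paths shortened):

  # bdevperf on core 1, 4K randwrite, qd 128, 10 s; -z waits for RPC setup
  build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 \
      -w randwrite -t 10 -S 1 -z
  # attach the exported lvol as local bdev Nvme0 (namespace 1 -> Nvme0n1)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # start the run described by the flags above
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Growing the lvstore mid-run is the point of the exercise: the per-second table that follows shows the workload riding through the resize, with the Fail/s column staying at 0.00 throughout.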
00:07:32.201 [2024-11-05 09:30:18.119798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63304 ] 00:07:32.459 [2024-11-05 09:30:18.262776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.459 [2024-11-05 09:30:18.292765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:32.459 [2024-11-05 09:30:18.320930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.459 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:32.459 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:32.459 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:33.026 Nvme0n1 00:07:33.026 09:30:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:33.285 [ 00:07:33.285 { 00:07:33.285 "name": "Nvme0n1", 00:07:33.285 "aliases": [ 00:07:33.285 "14bf9d7d-c5cf-4e66-8fce-54576ba22999" 00:07:33.285 ], 00:07:33.285 "product_name": "NVMe disk", 00:07:33.285 "block_size": 4096, 00:07:33.285 "num_blocks": 38912, 00:07:33.285 "uuid": "14bf9d7d-c5cf-4e66-8fce-54576ba22999", 00:07:33.285 "numa_id": -1, 00:07:33.285 "assigned_rate_limits": { 00:07:33.285 "rw_ios_per_sec": 0, 00:07:33.285 "rw_mbytes_per_sec": 0, 00:07:33.285 "r_mbytes_per_sec": 0, 00:07:33.285 "w_mbytes_per_sec": 0 00:07:33.285 }, 00:07:33.285 "claimed": false, 00:07:33.285 "zoned": false, 00:07:33.285 "supported_io_types": { 00:07:33.285 "read": true, 00:07:33.285 "write": true, 00:07:33.285 "unmap": true, 00:07:33.285 "flush": true, 00:07:33.285 "reset": true, 00:07:33.285 "nvme_admin": true, 00:07:33.285 "nvme_io": true, 00:07:33.285 "nvme_io_md": false, 00:07:33.285 "write_zeroes": true, 00:07:33.285 "zcopy": false, 00:07:33.285 "get_zone_info": false, 00:07:33.285 "zone_management": false, 00:07:33.285 "zone_append": false, 00:07:33.285 "compare": true, 00:07:33.285 "compare_and_write": true, 00:07:33.285 "abort": true, 00:07:33.285 "seek_hole": false, 00:07:33.285 "seek_data": false, 00:07:33.285 "copy": true, 00:07:33.285 "nvme_iov_md": false 00:07:33.285 }, 00:07:33.285 "memory_domains": [ 00:07:33.285 { 00:07:33.285 "dma_device_id": "system", 00:07:33.285 "dma_device_type": 1 00:07:33.285 } 00:07:33.285 ], 00:07:33.285 "driver_specific": { 00:07:33.285 "nvme": [ 00:07:33.285 { 00:07:33.285 "trid": { 00:07:33.285 "trtype": "TCP", 00:07:33.285 "adrfam": "IPv4", 00:07:33.285 "traddr": "10.0.0.3", 00:07:33.285 "trsvcid": "4420", 00:07:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:33.285 }, 00:07:33.285 "ctrlr_data": { 00:07:33.285 "cntlid": 1, 00:07:33.285 "vendor_id": "0x8086", 00:07:33.285 "model_number": "SPDK bdev Controller", 00:07:33.285 "serial_number": "SPDK0", 00:07:33.285 "firmware_revision": "25.01", 00:07:33.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:33.285 "oacs": { 00:07:33.285 "security": 0, 00:07:33.285 "format": 0, 00:07:33.285 "firmware": 0, 
00:07:33.285 "ns_manage": 0 00:07:33.285 }, 00:07:33.285 "multi_ctrlr": true, 00:07:33.285 "ana_reporting": false 00:07:33.285 }, 00:07:33.285 "vs": { 00:07:33.285 "nvme_version": "1.3" 00:07:33.285 }, 00:07:33.285 "ns_data": { 00:07:33.285 "id": 1, 00:07:33.285 "can_share": true 00:07:33.285 } 00:07:33.285 } 00:07:33.285 ], 00:07:33.285 "mp_policy": "active_passive" 00:07:33.285 } 00:07:33.285 } 00:07:33.285 ] 00:07:33.285 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=63320 00:07:33.285 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:33.285 09:30:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:33.285 Running I/O for 10 seconds... 00:07:34.237 Latency(us) 00:07:34.237 [2024-11-05T09:30:20.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:34.237 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.237 Nvme0n1 : 1.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:34.237 [2024-11-05T09:30:20.195Z] =================================================================================================================== 00:07:34.237 [2024-11-05T09:30:20.195Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:34.237 00:07:35.173 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:35.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:35.173 Nvme0n1 : 2.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:35.173 [2024-11-05T09:30:21.131Z] =================================================================================================================== 00:07:35.173 [2024-11-05T09:30:21.131Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:35.173 00:07:35.432 true 00:07:35.432 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:35.432 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:35.999 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:35.999 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:35.999 09:30:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 63320 00:07:36.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.258 Nvme0n1 : 3.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:36.258 [2024-11-05T09:30:22.216Z] =================================================================================================================== 00:07:36.258 [2024-11-05T09:30:22.216Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:36.258 00:07:37.195 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.195 Nvme0n1 : 4.00 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:37.195 [2024-11-05T09:30:23.153Z] 
=================================================================================================================== 00:07:37.195 [2024-11-05T09:30:23.153Z] Total : 6477.00 25.30 0.00 0.00 0.00 0.00 0.00 00:07:37.195 00:07:38.573 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:38.573 Nvme0n1 : 5.00 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:07:38.573 [2024-11-05T09:30:24.531Z] =================================================================================================================== 00:07:38.573 [2024-11-05T09:30:24.531Z] Total : 6426.20 25.10 0.00 0.00 0.00 0.00 0.00 00:07:38.573 00:07:39.510 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.510 Nvme0n1 : 6.00 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:07:39.510 [2024-11-05T09:30:25.468Z] =================================================================================================================== 00:07:39.510 [2024-11-05T09:30:25.468Z] Total : 6455.83 25.22 0.00 0.00 0.00 0.00 0.00 00:07:39.510 00:07:40.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.446 Nvme0n1 : 7.00 6440.71 25.16 0.00 0.00 0.00 0.00 0.00 00:07:40.446 [2024-11-05T09:30:26.404Z] =================================================================================================================== 00:07:40.446 [2024-11-05T09:30:26.404Z] Total : 6440.71 25.16 0.00 0.00 0.00 0.00 0.00 00:07:40.446 00:07:41.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.383 Nvme0n1 : 8.00 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:41.383 [2024-11-05T09:30:27.341Z] =================================================================================================================== 00:07:41.383 [2024-11-05T09:30:27.341Z] Total : 6413.50 25.05 0.00 0.00 0.00 0.00 0.00 00:07:41.383 00:07:42.319 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.319 Nvme0n1 : 9.00 6227.00 24.32 0.00 0.00 0.00 0.00 0.00 00:07:42.319 [2024-11-05T09:30:28.277Z] =================================================================================================================== 00:07:42.319 [2024-11-05T09:30:28.277Z] Total : 6227.00 24.32 0.00 0.00 0.00 0.00 0.00 00:07:42.319 00:07:43.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.254 Nvme0n1 : 10.00 6226.60 24.32 0.00 0.00 0.00 0.00 0.00 00:07:43.254 [2024-11-05T09:30:29.212Z] =================================================================================================================== 00:07:43.254 [2024-11-05T09:30:29.212Z] Total : 6226.60 24.32 0.00 0.00 0.00 0.00 0.00 00:07:43.254 00:07:43.254 00:07:43.254 Latency(us) 00:07:43.255 [2024-11-05T09:30:29.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.255 Nvme0n1 : 10.00 6223.77 24.31 0.00 0.00 20557.56 13285.93 301227.29 00:07:43.255 [2024-11-05T09:30:29.213Z] =================================================================================================================== 00:07:43.255 [2024-11-05T09:30:29.213Z] Total : 6223.77 24.31 0.00 0.00 20557.56 13285.93 301227.29 00:07:43.255 { 00:07:43.255 "results": [ 00:07:43.255 { 00:07:43.255 "job": "Nvme0n1", 00:07:43.255 "core_mask": "0x2", 00:07:43.255 "workload": "randwrite", 00:07:43.255 "status": "finished", 00:07:43.255 "queue_depth": 128, 00:07:43.255 "io_size": 4096, 00:07:43.255 "runtime": 
10.0047, 00:07:43.255 "iops": 6223.774825831859, 00:07:43.255 "mibps": 24.3116204134057, 00:07:43.255 "io_failed": 0, 00:07:43.255 "io_timeout": 0, 00:07:43.255 "avg_latency_us": 20557.563727583707, 00:07:43.255 "min_latency_us": 13285.934545454546, 00:07:43.255 "max_latency_us": 301227.2872727273 00:07:43.255 } 00:07:43.255 ], 00:07:43.255 "core_count": 1 00:07:43.255 } 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 63304 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 63304 ']' 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 63304 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63304 00:07:43.255 killing process with pid 63304 00:07:43.255 Received shutdown signal, test time was about 10.000000 seconds 00:07:43.255 00:07:43.255 Latency(us) 00:07:43.255 [2024-11-05T09:30:29.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.255 [2024-11-05T09:30:29.213Z] =================================================================================================================== 00:07:43.255 [2024-11-05T09:30:29.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63304' 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 63304 00:07:43.255 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 63304 00:07:43.514 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:07:43.783 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:44.043 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:44.043 09:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 62964 00:07:44.302 
09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 62964 00:07:44.302 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 62964 Killed "${NVMF_APP[@]}" "$@" 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=63453 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 63453 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:44.302 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 63453 ']' 00:07:44.303 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.303 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:44.303 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.303 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:44.303 09:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:44.303 [2024-11-05 09:30:30.236817] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:07:44.303 [2024-11-05 09:30:30.237822] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.561 [2024-11-05 09:30:30.384548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.561 [2024-11-05 09:30:30.414152] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:44.561 [2024-11-05 09:30:30.414247] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:44.561 [2024-11-05 09:30:30.414274] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:44.561 [2024-11-05 09:30:30.414281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:44.561 [2024-11-05 09:30:30.414287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
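This is where the dirty pass earns its name: the first target (pid 62964) was just removed with kill -9 rather than a clean shutdown, so the lvstore superblock on the backing file was never marked cleanly unloaded. The replacement target starting here re-registers the same AIO file, and blobstore detects the unclean state and replays its metadata before the lvstore is usable again, which is what the 'Performing recovery on blobstore' / 'Recover: blob 0x0 / 0x1' notices just below report. Condensed, the reload the next entries drive is roughly (all calls appear verbatim in this trace):

  # same backing file as before; examine of the AIO bdev triggers bs recovery
  scripts/rpc.py bdev_aio_create \
      /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
  scripts/rpc.py bdev_wait_for_examine
  # recovered lvstore: still 99 total data clusters, 99 - 38 allocated = 61 free
  scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86

The free_clusters == 61 and data_clusters == 99 checks further down are the actual assertions that recovery preserved both the grown capacity and the lvol's 38 allocated clusters.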
00:07:44.561 [2024-11-05 09:30:30.414560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.561 [2024-11-05 09:30:30.444695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:45.498 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.757 [2024-11-05 09:30:31.528100] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:45.757 [2024-11-05 09:30:31.528351] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:45.757 [2024-11-05 09:30:31.528530] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:45.757 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:46.017 09:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14bf9d7d-c5cf-4e66-8fce-54576ba22999 -t 2000 00:07:46.275 [ 00:07:46.275 { 00:07:46.275 "name": "14bf9d7d-c5cf-4e66-8fce-54576ba22999", 00:07:46.275 "aliases": [ 00:07:46.275 "lvs/lvol" 00:07:46.275 ], 00:07:46.275 "product_name": "Logical Volume", 00:07:46.275 "block_size": 4096, 00:07:46.275 "num_blocks": 38912, 00:07:46.275 "uuid": "14bf9d7d-c5cf-4e66-8fce-54576ba22999", 00:07:46.275 "assigned_rate_limits": { 00:07:46.275 "rw_ios_per_sec": 0, 00:07:46.275 "rw_mbytes_per_sec": 0, 00:07:46.275 "r_mbytes_per_sec": 0, 00:07:46.275 "w_mbytes_per_sec": 0 00:07:46.275 }, 00:07:46.275 
"claimed": false, 00:07:46.275 "zoned": false, 00:07:46.275 "supported_io_types": { 00:07:46.275 "read": true, 00:07:46.275 "write": true, 00:07:46.275 "unmap": true, 00:07:46.275 "flush": false, 00:07:46.275 "reset": true, 00:07:46.275 "nvme_admin": false, 00:07:46.275 "nvme_io": false, 00:07:46.275 "nvme_io_md": false, 00:07:46.275 "write_zeroes": true, 00:07:46.275 "zcopy": false, 00:07:46.275 "get_zone_info": false, 00:07:46.275 "zone_management": false, 00:07:46.275 "zone_append": false, 00:07:46.275 "compare": false, 00:07:46.275 "compare_and_write": false, 00:07:46.275 "abort": false, 00:07:46.275 "seek_hole": true, 00:07:46.275 "seek_data": true, 00:07:46.275 "copy": false, 00:07:46.275 "nvme_iov_md": false 00:07:46.275 }, 00:07:46.275 "driver_specific": { 00:07:46.275 "lvol": { 00:07:46.275 "lvol_store_uuid": "25678c61-6c30-441a-bc2b-7c2ef536ac86", 00:07:46.275 "base_bdev": "aio_bdev", 00:07:46.275 "thin_provision": false, 00:07:46.275 "num_allocated_clusters": 38, 00:07:46.275 "snapshot": false, 00:07:46.275 "clone": false, 00:07:46.275 "esnap_clone": false 00:07:46.275 } 00:07:46.275 } 00:07:46.275 } 00:07:46.275 ] 00:07:46.275 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:46.275 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:46.275 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:46.541 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:46.541 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:46.541 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:46.845 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:46.846 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.137 [2024-11-05 09:30:32.833749] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.137 09:30:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:47.137 09:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:47.395 request: 00:07:47.395 { 00:07:47.395 "uuid": "25678c61-6c30-441a-bc2b-7c2ef536ac86", 00:07:47.395 "method": "bdev_lvol_get_lvstores", 00:07:47.395 "req_id": 1 00:07:47.395 } 00:07:47.395 Got JSON-RPC error response 00:07:47.395 response: 00:07:47.395 { 00:07:47.395 "code": -19, 00:07:47.395 "message": "No such device" 00:07:47.395 } 00:07:47.395 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:47.395 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:47.395 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:47.395 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:47.395 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:47.395 aio_bdev 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:07:47.654 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:47.913 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14bf9d7d-c5cf-4e66-8fce-54576ba22999 -t 2000 00:07:47.913 [ 00:07:47.913 { 
00:07:47.913 "name": "14bf9d7d-c5cf-4e66-8fce-54576ba22999", 00:07:47.913 "aliases": [ 00:07:47.913 "lvs/lvol" 00:07:47.913 ], 00:07:47.913 "product_name": "Logical Volume", 00:07:47.913 "block_size": 4096, 00:07:47.913 "num_blocks": 38912, 00:07:47.913 "uuid": "14bf9d7d-c5cf-4e66-8fce-54576ba22999", 00:07:47.913 "assigned_rate_limits": { 00:07:47.913 "rw_ios_per_sec": 0, 00:07:47.913 "rw_mbytes_per_sec": 0, 00:07:47.913 "r_mbytes_per_sec": 0, 00:07:47.913 "w_mbytes_per_sec": 0 00:07:47.913 }, 00:07:47.913 "claimed": false, 00:07:47.913 "zoned": false, 00:07:47.913 "supported_io_types": { 00:07:47.913 "read": true, 00:07:47.913 "write": true, 00:07:47.913 "unmap": true, 00:07:47.913 "flush": false, 00:07:47.913 "reset": true, 00:07:47.913 "nvme_admin": false, 00:07:47.913 "nvme_io": false, 00:07:47.913 "nvme_io_md": false, 00:07:47.913 "write_zeroes": true, 00:07:47.913 "zcopy": false, 00:07:47.913 "get_zone_info": false, 00:07:47.913 "zone_management": false, 00:07:47.913 "zone_append": false, 00:07:47.913 "compare": false, 00:07:47.913 "compare_and_write": false, 00:07:47.913 "abort": false, 00:07:47.913 "seek_hole": true, 00:07:47.913 "seek_data": true, 00:07:47.913 "copy": false, 00:07:47.913 "nvme_iov_md": false 00:07:47.913 }, 00:07:47.913 "driver_specific": { 00:07:47.913 "lvol": { 00:07:47.913 "lvol_store_uuid": "25678c61-6c30-441a-bc2b-7c2ef536ac86", 00:07:47.913 "base_bdev": "aio_bdev", 00:07:47.913 "thin_provision": false, 00:07:47.913 "num_allocated_clusters": 38, 00:07:47.913 "snapshot": false, 00:07:47.913 "clone": false, 00:07:47.913 "esnap_clone": false 00:07:47.913 } 00:07:47.913 } 00:07:47.913 } 00:07:47.913 ] 00:07:47.913 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:07:47.913 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:47.913 09:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:48.480 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:48.480 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:48.480 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:48.480 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:48.480 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 14bf9d7d-c5cf-4e66-8fce-54576ba22999 00:07:48.738 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 25678c61-6c30-441a-bc2b-7c2ef536ac86 00:07:48.997 09:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:49.565 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:07:49.824 ************************************ 00:07:49.824 END TEST lvs_grow_dirty 00:07:49.824 ************************************ 00:07:49.824 00:07:49.824 real 0m20.216s 00:07:49.824 user 0m39.749s 00:07:49.824 sys 0m8.826s 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:49.824 nvmf_trace.0 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:49.824 09:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:07:50.391 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:50.391 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:07:50.391 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:50.391 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:50.391 rmmod nvme_tcp 00:07:50.391 rmmod nvme_fabrics 00:07:50.391 rmmod nvme_keyring 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 63453 ']' 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 63453 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 63453 ']' 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 63453 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:07:50.649 09:30:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63453 00:07:50.649 killing process with pid 63453 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63453' 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 63453 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 63453 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:50.649 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:50.650 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:07:50.909 00:07:50.909 real 0m40.388s 00:07:50.909 user 1m3.388s 00:07:50.909 sys 0m12.359s 00:07:50.909 ************************************ 00:07:50.909 END TEST nvmf_lvs_grow 00:07:50.909 ************************************ 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.909 ************************************ 00:07:50.909 START TEST nvmf_bdev_io_wait 00:07:50.909 ************************************ 00:07:50.909 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:51.168 * Looking for test storage... 
00:07:51.168 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:51.168 09:30:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.168 --rc genhtml_branch_coverage=1 00:07:51.168 --rc genhtml_function_coverage=1 00:07:51.168 --rc genhtml_legend=1 00:07:51.168 --rc geninfo_all_blocks=1 00:07:51.168 --rc geninfo_unexecuted_blocks=1 00:07:51.168 00:07:51.168 ' 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.168 --rc genhtml_branch_coverage=1 00:07:51.168 --rc genhtml_function_coverage=1 00:07:51.168 --rc genhtml_legend=1 00:07:51.168 --rc geninfo_all_blocks=1 00:07:51.168 --rc geninfo_unexecuted_blocks=1 00:07:51.168 00:07:51.168 ' 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.168 --rc genhtml_branch_coverage=1 00:07:51.168 --rc genhtml_function_coverage=1 00:07:51.168 --rc genhtml_legend=1 00:07:51.168 --rc geninfo_all_blocks=1 00:07:51.168 --rc geninfo_unexecuted_blocks=1 00:07:51.168 00:07:51.168 ' 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.168 --rc genhtml_branch_coverage=1 00:07:51.168 --rc genhtml_function_coverage=1 00:07:51.168 --rc genhtml_legend=1 00:07:51.168 --rc geninfo_all_blocks=1 00:07:51.168 --rc geninfo_unexecuted_blocks=1 00:07:51.168 00:07:51.168 ' 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:51.168 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:51.169 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
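The "[: : integer expression expected" complaint above is benign: line 33 of test/nvmf/common.sh feeds an empty expansion to bash's numeric test, the test exits with status 2, and the script simply takes the false branch. A minimal reproduction of the pattern, with FLAG as a stand-in name (the trace shows only the empty expansion, not which variable produced it):

FLAG=''
[ "$FLAG" -eq 1 ]        # prints "[: : integer expression expected", exit status 2
[ "${FLAG:-0}" -eq 1 ]   # defensive variant: defaults the value to 0 and is cleanly false
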
00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:51.169 
09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:51.169 Cannot find device "nvmf_init_br" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:51.169 Cannot find device "nvmf_init_br2" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:51.169 Cannot find device "nvmf_tgt_br" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:51.169 Cannot find device "nvmf_tgt_br2" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:51.169 Cannot find device "nvmf_init_br" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:51.169 Cannot find device "nvmf_init_br2" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:51.169 Cannot find device "nvmf_tgt_br" 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:07:51.169 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:51.428 Cannot find device "nvmf_tgt_br2" 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:51.428 Cannot find device "nvmf_br" 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:51.428 Cannot find device "nvmf_init_if" 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:51.428 Cannot find device "nvmf_init_if2" 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:51.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:07:51.428 
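The "Cannot find device" probes above only confirm a clean slate before setup. nvmf_veth_init then builds the virtual topology used for the rest of the run: two initiator-side veth pairs that stay in the root namespace, two target-side pairs whose far ends move into the nvmf_tgt_ns_spdk namespace, and a bridge joining the four *_br ends. A condensed sketch of the commands traced below, with names and addresses exactly as they appear in this log:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator, 10.0.0.1/24
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator, 10.0.0.2/24
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target,    10.0.0.3/24
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target,    10.0.0.4/24
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk               # target ends live in the netns
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up && ip link set "$l" master nvmf_br    # enslave all bridge-side ends
done
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The four pings that follow verify the result: 10.0.0.3 and 10.0.0.4 reachable from the root namespace, 10.0.0.1 and 10.0.0.2 reachable from inside the netns.
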
09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:51.428 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:51.428 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:51.688 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:51.688 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:07:51.688 00:07:51.688 --- 10.0.0.3 ping statistics --- 00:07:51.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.688 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:51.688 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:51.688 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.064 ms 00:07:51.688 00:07:51.688 --- 10.0.0.4 ping statistics --- 00:07:51.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.688 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:51.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:51.688 00:07:51.688 --- 10.0.0.1 ping statistics --- 00:07:51.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.688 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:51.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:51.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:07:51.688 00:07:51.688 --- 10.0.0.2 ping statistics --- 00:07:51.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.688 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=63828 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 63828 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 63828 ']' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.688 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.688 [2024-11-05 09:30:37.500139] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:07:51.688 [2024-11-05 09:30:37.500240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.688 [2024-11-05 09:30:37.647030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.947 [2024-11-05 09:30:37.680578] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.947 [2024-11-05 09:30:37.680640] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.947 [2024-11-05 09:30:37.680650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.947 [2024-11-05 09:30:37.680657] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.947 [2024-11-05 09:30:37.680663] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.947 [2024-11-05 09:30:37.681624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.947 [2024-11-05 09:30:37.682091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.947 [2024-11-05 09:30:37.682201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.947 [2024-11-05 09:30:37.682204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.947 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 [2024-11-05 09:30:37.833810] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 [2024-11-05 09:30:37.848572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 Malloc0 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.948 [2024-11-05 09:30:37.895497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=63861 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=63863 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.948 09:30:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.948 { 00:07:51.948 "params": { 00:07:51.948 "name": "Nvme$subsystem", 00:07:51.948 "trtype": "$TEST_TRANSPORT", 00:07:51.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.948 "adrfam": "ipv4", 00:07:51.948 "trsvcid": "$NVMF_PORT", 00:07:51.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.948 "hdgst": ${hdgst:-false}, 00:07:51.948 "ddgst": ${ddgst:-false} 00:07:51.948 }, 00:07:51.948 "method": "bdev_nvme_attach_controller" 00:07:51.948 } 00:07:51.948 EOF 00:07:51.948 )") 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=63865 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:51.948 { 00:07:51.948 "params": { 00:07:51.948 "name": "Nvme$subsystem", 00:07:51.948 "trtype": "$TEST_TRANSPORT", 00:07:51.948 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:51.948 "adrfam": "ipv4", 00:07:51.948 "trsvcid": "$NVMF_PORT", 00:07:51.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:51.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:51.948 "hdgst": ${hdgst:-false}, 00:07:51.948 "ddgst": ${ddgst:-false} 00:07:51.948 }, 00:07:51.948 "method": "bdev_nvme_attach_controller" 00:07:51.948 } 00:07:51.948 EOF 00:07:51.948 )") 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=63867 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:51.948 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
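With the listener up, the target side is fully configured. The rpc_cmd sequence traced above reduces to these direct calls (a sketch; rpc.py's default RPC socket is assumed). The deliberately tiny bdev_io pool and cache are the point of this test: they make bdev_io allocation fail under load, so the io-wait retry path that gives bdev_io_wait.sh its name actually runs:

scripts/rpc.py bdev_set_options -p 5 -c 1      # bdev_io pool of 5, per-thread cache of 1
scripts/rpc.py framework_start_init            # finish init (app was started with --wait-for-rpc)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB bdev, 512 B blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
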
00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.207 { 00:07:52.207 "params": { 00:07:52.207 "name": "Nvme$subsystem", 00:07:52.207 "trtype": "$TEST_TRANSPORT", 00:07:52.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.207 "adrfam": "ipv4", 00:07:52.207 "trsvcid": "$NVMF_PORT", 00:07:52.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.207 "hdgst": ${hdgst:-false}, 00:07:52.207 "ddgst": ${ddgst:-false} 00:07:52.207 }, 00:07:52.207 "method": "bdev_nvme_attach_controller" 00:07:52.207 } 00:07:52.207 EOF 00:07:52.207 )") 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.207 "params": { 00:07:52.207 "name": "Nvme1", 00:07:52.207 "trtype": "tcp", 00:07:52.207 "traddr": "10.0.0.3", 00:07:52.207 "adrfam": "ipv4", 00:07:52.207 "trsvcid": "4420", 00:07:52.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.207 "hdgst": false, 00:07:52.207 "ddgst": false 00:07:52.207 }, 00:07:52.207 "method": "bdev_nvme_attach_controller" 00:07:52.207 }' 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.207 { 00:07:52.207 "params": { 00:07:52.207 "name": "Nvme$subsystem", 00:07:52.207 "trtype": "$TEST_TRANSPORT", 00:07:52.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.207 "adrfam": "ipv4", 00:07:52.207 "trsvcid": "$NVMF_PORT", 00:07:52.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.207 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.207 "hdgst": ${hdgst:-false}, 00:07:52.207 "ddgst": ${ddgst:-false} 00:07:52.207 }, 00:07:52.207 "method": "bdev_nvme_attach_controller" 00:07:52.207 } 00:07:52.207 EOF 00:07:52.207 )") 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.207 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.207 "params": { 00:07:52.207 "name": "Nvme1", 00:07:52.208 "trtype": "tcp", 00:07:52.208 "traddr": "10.0.0.3", 00:07:52.208 "adrfam": "ipv4", 00:07:52.208 "trsvcid": "4420", 00:07:52.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.208 "hdgst": false, 00:07:52.208 "ddgst": false 00:07:52.208 }, 00:07:52.208 "method": "bdev_nvme_attach_controller" 00:07:52.208 }' 00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
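Each bdevperf instance gets its NVMe-oF connection from gen_nvmf_target_json: the heredoc template above expands into the bdev_nvme_attach_controller entry just printed, and bdevperf reads the result through process substitution as --json /dev/fd/63. Written out by hand, the write instance is roughly equivalent to the sketch below; config.json is a hypothetical file name, and the outer subsystems wrapper is reconstructed from gen_nvmf_target_json in test/nvmf/common.sh rather than printed in this log:

cat > config.json << 'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 --json config.json

The attached controller surfaces inside bdevperf as bdev Nvme1n1 (controller name "Nvme1", namespace 1), which is the bdev the workload drives.
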
00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.208 "params": { 00:07:52.208 "name": "Nvme1", 00:07:52.208 "trtype": "tcp", 00:07:52.208 "traddr": "10.0.0.3", 00:07:52.208 "adrfam": "ipv4", 00:07:52.208 "trsvcid": "4420", 00:07:52.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.208 "hdgst": false, 00:07:52.208 "ddgst": false 00:07:52.208 }, 00:07:52.208 "method": "bdev_nvme_attach_controller" 00:07:52.208 }' 00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.208 "params": { 00:07:52.208 "name": "Nvme1", 00:07:52.208 "trtype": "tcp", 00:07:52.208 "traddr": "10.0.0.3", 00:07:52.208 "adrfam": "ipv4", 00:07:52.208 "trsvcid": "4420", 00:07:52.208 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.208 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:52.208 "hdgst": false, 00:07:52.208 "ddgst": false 00:07:52.208 }, 00:07:52.208 "method": "bdev_nvme_attach_controller" 00:07:52.208 }' 00:07:52.208 [2024-11-05 09:30:37.957792] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:07:52.208 [2024-11-05 09:30:37.957895] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:52.208 [2024-11-05 09:30:37.984127] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:07:52.208 [2024-11-05 09:30:37.984216] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:52.208 [2024-11-05 09:30:37.987285] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:07:52.208 [2024-11-05 09:30:37.987532] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:52.208 09:30:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 63861 00:07:52.208 [2024-11-05 09:30:37.992095] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:07:52.208 [2024-11-05 09:30:37.992166] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:52.208 [2024-11-05 09:30:38.151720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.467 [2024-11-05 09:30:38.183160] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:52.467 [2024-11-05 09:30:38.194536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.467 [2024-11-05 09:30:38.197001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.467 [2024-11-05 09:30:38.225179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:52.467 [2024-11-05 09:30:38.237170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.467 [2024-11-05 09:30:38.238942] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.467 [2024-11-05 09:30:38.268526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:52.467 [2024-11-05 09:30:38.281979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.467 [2024-11-05 09:30:38.282536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.467 Running I/O for 1 seconds... 00:07:52.467 [2024-11-05 09:30:38.313101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:52.467 [2024-11-05 09:30:38.327725] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.467 Running I/O for 1 seconds... 00:07:52.467 Running I/O for 1 seconds... 00:07:52.726 Running I/O for 1 seconds... 
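Each job then prints its own "Running I/O for 1 seconds..." banner, and the harness reaps them in launch order with the wait calls traced at bdev_io_wait.sh@37-40; the PID-to-workload pairing below is inferred from those line numbers rather than stated in the log. In the result tables that follow, the flush job's roughly 166k IOPS, more than twenty times the read and write rates, is plausibly because a flush against the RAM-backed Malloc bdev completes without any media access.

    wait "$WRITE_PID"   # 63861, -w write, core mask 0x10
    wait "$READ_PID"    # 63863, -w read,  core mask 0x20
    wait "$FLUSH_PID"   # 63865, -w flush, core mask 0x40
    wait "$UNMAP_PID"   # 63867, -w unmap, core mask 0x80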
00:07:53.662 6232.00 IOPS, 24.34 MiB/s 00:07:53.662 Latency(us) 00:07:53.662 [2024-11-05T09:30:39.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.662 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:53.662 Nvme1n1 : 1.02 6259.51 24.45 0.00 0.00 20303.14 6255.71 34555.35 00:07:53.662 [2024-11-05T09:30:39.620Z] =================================================================================================================== 00:07:53.662 [2024-11-05T09:30:39.620Z] Total : 6259.51 24.45 0.00 0.00 20303.14 6255.71 34555.35 00:07:53.662 165856.00 IOPS, 647.88 MiB/s 00:07:53.662 Latency(us) 00:07:53.662 [2024-11-05T09:30:39.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.662 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:53.662 Nvme1n1 : 1.00 165512.29 646.53 0.00 0.00 769.39 390.98 2040.55 00:07:53.662 [2024-11-05T09:30:39.620Z] =================================================================================================================== 00:07:53.662 [2024-11-05T09:30:39.621Z] Total : 165512.29 646.53 0.00 0.00 769.39 390.98 2040.55 00:07:53.663 7959.00 IOPS, 31.09 MiB/s 00:07:53.663 Latency(us) 00:07:53.663 [2024-11-05T09:30:39.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.663 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:53.663 Nvme1n1 : 1.01 8000.47 31.25 0.00 0.00 15906.61 9592.09 27286.81 00:07:53.663 [2024-11-05T09:30:39.621Z] =================================================================================================================== 00:07:53.663 [2024-11-05T09:30:39.621Z] Total : 8000.47 31.25 0.00 0.00 15906.61 9592.09 27286.81 00:07:53.663 6203.00 IOPS, 24.23 MiB/s [2024-11-05T09:30:39.621Z] 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 63863 00:07:53.663 00:07:53.663 Latency(us) 00:07:53.663 [2024-11-05T09:30:39.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.663 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:53.663 Nvme1n1 : 1.01 6341.51 24.77 0.00 0.00 20113.99 5391.83 47424.23 00:07:53.663 [2024-11-05T09:30:39.621Z] =================================================================================================================== 00:07:53.663 [2024-11-05T09:30:39.621Z] Total : 6341.51 24.77 0.00 0.00 20113.99 5391.83 47424.23 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 63865 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 63867 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.663 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.663 rmmod nvme_tcp 00:07:53.921 rmmod nvme_fabrics 00:07:53.921 rmmod nvme_keyring 00:07:53.921 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 63828 ']' 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 63828 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 63828 ']' 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 63828 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63828 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63828' 00:07:53.922 killing process with pid 63828 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 63828 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 63828 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:07:53.922 09:30:39 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:07:53.922 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.180 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:07:54.181 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:07:54.181 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:07:54.181 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:07:54.181 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:07:54.181 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:07:54.181 09:30:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:07:54.181 00:07:54.181 real 0m3.258s 00:07:54.181 user 0m12.907s 00:07:54.181 sys 0m1.974s 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:54.181 ************************************ 00:07:54.181 END TEST nvmf_bdev_io_wait 00:07:54.181 ************************************ 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:54.181 ************************************ 00:07:54.181 START TEST nvmf_queue_depth 00:07:54.181 ************************************ 00:07:54.181 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:54.441 * Looking for test 
storage... 00:07:54.441 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.441 --rc genhtml_branch_coverage=1 00:07:54.441 --rc genhtml_function_coverage=1 00:07:54.441 --rc genhtml_legend=1 00:07:54.441 --rc geninfo_all_blocks=1 00:07:54.441 --rc geninfo_unexecuted_blocks=1 00:07:54.441 00:07:54.441 ' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.441 --rc genhtml_branch_coverage=1 00:07:54.441 --rc genhtml_function_coverage=1 00:07:54.441 --rc genhtml_legend=1 00:07:54.441 --rc geninfo_all_blocks=1 00:07:54.441 --rc geninfo_unexecuted_blocks=1 00:07:54.441 00:07:54.441 ' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.441 --rc genhtml_branch_coverage=1 00:07:54.441 --rc genhtml_function_coverage=1 00:07:54.441 --rc genhtml_legend=1 00:07:54.441 --rc geninfo_all_blocks=1 00:07:54.441 --rc geninfo_unexecuted_blocks=1 00:07:54.441 00:07:54.441 ' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:54.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.441 --rc genhtml_branch_coverage=1 00:07:54.441 --rc genhtml_function_coverage=1 00:07:54.441 --rc genhtml_legend=1 00:07:54.441 --rc geninfo_all_blocks=1 00:07:54.441 --rc geninfo_unexecuted_blocks=1 00:07:54.441 00:07:54.441 ' 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.441 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:54.442 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:54.442 
09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:54.442 09:30:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:07:54.442 Cannot find device "nvmf_init_br" 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:07:54.442 Cannot find device "nvmf_init_br2" 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:07:54.442 Cannot find device "nvmf_tgt_br" 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:07:54.442 Cannot find device "nvmf_tgt_br2" 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:07:54.442 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:07:54.702 Cannot find device "nvmf_init_br" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:07:54.702 Cannot find device "nvmf_init_br2" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:07:54.702 Cannot find device "nvmf_tgt_br" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:07:54.702 Cannot find device "nvmf_tgt_br2" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:07:54.702 Cannot find device "nvmf_br" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:07:54.702 Cannot find device "nvmf_init_if" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:07:54.702 Cannot find device "nvmf_init_if2" 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:54.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.702 09:30:40 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:54.702 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:54.702 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:07:54.962 
09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:07:54.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:54.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:07:54.962 00:07:54.962 --- 10.0.0.3 ping statistics --- 00:07:54.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.962 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:07:54.962 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:07:54.962 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:07:54.962 00:07:54.962 --- 10.0.0.4 ping statistics --- 00:07:54.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.962 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:54.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:54.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:54.962 00:07:54.962 --- 10.0.0.1 ping statistics --- 00:07:54.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.962 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:07:54.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:54.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:07:54.962 00:07:54.962 --- 10.0.0.2 ping statistics --- 00:07:54.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:54.962 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=64125 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 64125 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64125 ']' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:54.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.962 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:54.963 09:30:40 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:54.963 [2024-11-05 09:30:40.857439] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
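The teardown at the end of the previous test removed the virtual network, so nvmf_veth_init rebuilds it here, and the four pings above confirm connectivity before the target starts. Condensed from the trace (the "Cannot find device" probes, link-up steps, iptables ACCEPT rules for port 4420, and error handling are omitted), the topology is two initiator veths in the root namespace and two target veths inside nvmf_tgt_ns_spdk, all joined by the nvmf_br bridge:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                 # NVMF_FIRST_INITIATOR_IP
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                                # NVMF_SECOND_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2  # NVMF_SECOND_TARGET_IP
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$br" master nvmf_br    # bridge-side end of each veth pair
    done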
00:07:54.963 [2024-11-05 09:30:40.857551] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.221 [2024-11-05 09:30:41.013218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.221 [2024-11-05 09:30:41.052939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:55.221 [2024-11-05 09:30:41.052998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:55.221 [2024-11-05 09:30:41.053013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:55.221 [2024-11-05 09:30:41.053034] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:55.221 [2024-11-05 09:30:41.053046] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:55.221 [2024-11-05 09:30:41.053456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.221 [2024-11-05 09:30:41.088459] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.221 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.221 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:55.221 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:55.221 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:55.221 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.221 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.480 [2024-11-05 09:30:41.187027] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.480 Malloc0 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.480 [2024-11-05 09:30:41.230022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=64149 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 64149 /var/tmp/bdevperf.sock 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 64149 ']' 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:55.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:55.480 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.481 [2024-11-05 09:30:41.293109] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
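With nvmf_tgt (pid 64125) answering on /var/tmp/spdk.sock inside the namespace, queue_depth.sh provisions the target over RPC and then points a bdevperf at it with a queue depth of 1024. rpc_cmd is the harness wrapper around SPDK's rpc.py, so the equivalent direct invocations would look roughly like the sketch below; the rpc.py path is an assumption, while every command and argument is copied from the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"    # assumed target of rpc_cmd
    $RPC nvmf_create_transport -t tcp -o -u 8192         # options from queue_depth.sh@23
    $RPC bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # bdevperf (pid 64149) is started with -z -r /var/tmp/bdevperf.sock and,
    # as traced just below, told over its own RPC socket to attach the
    # controller under test:
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1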
00:07:55.481 [2024-11-05 09:30:41.293212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64149 ] 00:07:55.739 [2024-11-05 09:30:41.441011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.739 [2024-11-05 09:30:41.482526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.739 [2024-11-05 09:30:41.514300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.739 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:55.739 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:07:55.740 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:55.740 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.740 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:55.740 NVMe0n1 00:07:55.740 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.740 09:30:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:55.998 Running I/O for 10 seconds... 00:07:57.872 6144.00 IOPS, 24.00 MiB/s [2024-11-05T09:30:45.207Z] 6405.00 IOPS, 25.02 MiB/s [2024-11-05T09:30:45.775Z] 6506.33 IOPS, 25.42 MiB/s [2024-11-05T09:30:47.213Z] 6699.25 IOPS, 26.17 MiB/s [2024-11-05T09:30:47.781Z] 6787.80 IOPS, 26.51 MiB/s [2024-11-05T09:30:49.161Z] 6860.67 IOPS, 26.80 MiB/s [2024-11-05T09:30:50.096Z] 6934.29 IOPS, 27.09 MiB/s [2024-11-05T09:30:51.032Z] 7043.00 IOPS, 27.51 MiB/s [2024-11-05T09:30:51.971Z] 7066.44 IOPS, 27.60 MiB/s [2024-11-05T09:30:51.971Z] 7109.30 IOPS, 27.77 MiB/s 00:08:06.013 Latency(us) 00:08:06.013 [2024-11-05T09:30:51.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.013 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:06.013 Verification LBA range: start 0x0 length 0x4000 00:08:06.013 NVMe0n1 : 10.08 7151.54 27.94 0.00 0.00 142412.66 17039.36 101997.85 00:08:06.013 [2024-11-05T09:30:51.971Z] =================================================================================================================== 00:08:06.013 [2024-11-05T09:30:51.971Z] Total : 7151.54 27.94 0.00 0.00 142412.66 17039.36 101997.85 00:08:06.013 { 00:08:06.013 "results": [ 00:08:06.013 { 00:08:06.013 "job": "NVMe0n1", 00:08:06.013 "core_mask": "0x1", 00:08:06.013 "workload": "verify", 00:08:06.013 "status": "finished", 00:08:06.013 "verify_range": { 00:08:06.013 "start": 0, 00:08:06.013 "length": 16384 00:08:06.013 }, 00:08:06.013 "queue_depth": 1024, 00:08:06.013 "io_size": 4096, 00:08:06.013 "runtime": 10.084123, 00:08:06.013 "iops": 7151.539107565427, 00:08:06.013 "mibps": 27.93569963892745, 00:08:06.013 "io_failed": 0, 00:08:06.013 "io_timeout": 0, 00:08:06.013 "avg_latency_us": 142412.66233925425, 00:08:06.013 "min_latency_us": 17039.36, 00:08:06.013 "max_latency_us": 101997.84727272727 00:08:06.013 } 
00:08:06.013 ], 00:08:06.013 "core_count": 1 00:08:06.013 } 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 64149 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64149 ']' 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64149 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64149 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:06.013 killing process with pid 64149 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64149' 00:08:06.013 Received shutdown signal, test time was about 10.000000 seconds 00:08:06.013 00:08:06.013 Latency(us) 00:08:06.013 [2024-11-05T09:30:51.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:06.013 [2024-11-05T09:30:51.971Z] =================================================================================================================== 00:08:06.013 [2024-11-05T09:30:51.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64149 00:08:06.013 09:30:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64149 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:06.272 rmmod nvme_tcp 00:08:06.272 rmmod nvme_fabrics 00:08:06.272 rmmod nvme_keyring 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 64125 ']' 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 64125 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 64125 ']' 00:08:06.272 
09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 64125 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64125 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:06.272 killing process with pid 64125 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64125' 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 64125 00:08:06.272 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 64125 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:06.531 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:06.790 09:30:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:08:06.790 00:08:06.790 real 0m12.504s 00:08:06.790 user 0m21.174s 00:08:06.790 sys 0m2.229s 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:06.790 ************************************ 00:08:06.790 END TEST nvmf_queue_depth 00:08:06.790 ************************************ 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:06.790 09:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.791 09:30:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:06.791 ************************************ 00:08:06.791 START TEST nvmf_target_multipath 00:08:06.791 ************************************ 00:08:06.791 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:07.066 * Looking for test storage... 
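Condensed from the nvmftestfini trace above, the cleanup between tests amounts to: kill the target, unload the kernel initiator modules, drop the SPDK-tagged firewall rules, and tear down the veth/bridge topology. A sketch of the equivalent commands (the loop compacts the traced per-link calls; the final namespace delete is inferred from _remove_spdk_ns, since the next test re-creates the namespace):

kill 64125 && wait 64125                     # stop the nvmf target
modprobe -v -r nvme-tcp                      # rmmod nvme_tcp / nvme_fabrics / nvme_keyring above
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" nomaster; ip link set "$l" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if; ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk             # assumed: performed inside _remove_spdk_ns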
00:08:07.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.067 --rc genhtml_branch_coverage=1 00:08:07.067 --rc genhtml_function_coverage=1 00:08:07.067 --rc genhtml_legend=1 00:08:07.067 --rc geninfo_all_blocks=1 00:08:07.067 --rc geninfo_unexecuted_blocks=1 00:08:07.067 00:08:07.067 ' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.067 --rc genhtml_branch_coverage=1 00:08:07.067 --rc genhtml_function_coverage=1 00:08:07.067 --rc genhtml_legend=1 00:08:07.067 --rc geninfo_all_blocks=1 00:08:07.067 --rc geninfo_unexecuted_blocks=1 00:08:07.067 00:08:07.067 ' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.067 --rc genhtml_branch_coverage=1 00:08:07.067 --rc genhtml_function_coverage=1 00:08:07.067 --rc genhtml_legend=1 00:08:07.067 --rc geninfo_all_blocks=1 00:08:07.067 --rc geninfo_unexecuted_blocks=1 00:08:07.067 00:08:07.067 ' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.067 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.067 --rc genhtml_branch_coverage=1 00:08:07.067 --rc genhtml_function_coverage=1 00:08:07.067 --rc genhtml_legend=1 00:08:07.067 --rc geninfo_all_blocks=1 00:08:07.067 --rc geninfo_unexecuted_blocks=1 00:08:07.067 00:08:07.067 ' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 
09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:07.067 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:07.067 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:07.068 09:30:52 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:07.068 Cannot find device "nvmf_init_br" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:07.068 Cannot find device "nvmf_init_br2" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:07.068 Cannot find device "nvmf_tgt_br" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:07.068 Cannot find device "nvmf_tgt_br2" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:07.068 Cannot find device "nvmf_init_br" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:07.068 Cannot find device "nvmf_init_br2" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:07.068 Cannot find device "nvmf_tgt_br" 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:08:07.068 09:30:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:07.068 Cannot find device "nvmf_tgt_br2" 00:08:07.068 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:08:07.068 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:07.340 Cannot find device "nvmf_br" 00:08:07.340 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:08:07.340 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:07.340 Cannot find device "nvmf_init_if" 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:07.341 Cannot find device "nvmf_init_if2" 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:07.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:07.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
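The nvmf_veth_init bring-up traced here (it continues below with the bridge wiring and the ping checks) builds two initiator/target interface pairs, with the target ends moved into a private network namespace. Stripped of the xtrace prefixes, the topology is created roughly as:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br    # initiator path 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2   # initiator path 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br     # target path 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2    # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# the links are then set up and the *_br ends enslaved to bridge nvmf_br, as the trace below shows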
00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:07.341 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:07.341 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:08:07.341 00:08:07.341 --- 10.0.0.3 ping statistics --- 00:08:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.341 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:07.341 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:07.341 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.088 ms 00:08:07.341 00:08:07.341 --- 10.0.0.4 ping statistics --- 00:08:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.341 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:07.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:07.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:08:07.341 00:08:07.341 --- 10.0.0.1 ping statistics --- 00:08:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.341 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:07.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:07.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:08:07.341 00:08:07.341 --- 10.0.0.2 ping statistics --- 00:08:07.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:07.341 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=64513 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 64513 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@833 -- # '[' -z 64513 ']' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:07.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
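All four reachability pings succeed, so nvmf_tgt is started inside the namespace (pid 64513 below) and the multipath test provisions one subsystem with a single Malloc namespace listening on both target addresses. Condensed from the RPCs and nvme-cli calls traced below, with flags exactly as logged:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# host side: one connection per path to the same NQN (NVME_HOSTNQN/NVME_HOSTID as set in common.sh above)
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G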
00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:07.341 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:07.601 [2024-11-05 09:30:53.360274] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:08:07.601 [2024-11-05 09:30:53.360383] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.601 [2024-11-05 09:30:53.514932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:07.601 [2024-11-05 09:30:53.557139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:07.601 [2024-11-05 09:30:53.557199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:07.601 [2024-11-05 09:30:53.557213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:07.601 [2024-11-05 09:30:53.557223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:07.601 [2024-11-05 09:30:53.557232] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:07.601 [2024-11-05 09:30:53.558279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:07.601 [2024-11-05 09:30:53.558337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:07.601 [2024-11-05 09:30:53.558491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.601 [2024-11-05 09:30:53.558475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:07.859 [2024-11-05 09:30:53.606680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@866 -- # return 0 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:07.859 09:30:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.118 [2024-11-05 09:30:54.041561] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.118 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:08:08.377 Malloc0 00:08:08.635 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:08:08.894 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:09.153 09:30:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:09.153 [2024-11-05 09:30:55.092400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:09.153 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:08:09.412 [2024-11-05 09:30:55.348629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:08:09.412 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:08:09.670 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:08:09.670 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:08:09.670 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # local i=0 00:08:09.670 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:09.670 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:09.670 09:30:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # sleep 2 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # return 0 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:12.208 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=64595 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:12.209 09:30:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:08:12.209 [global] 00:08:12.209 thread=1 00:08:12.209 invalidate=1 00:08:12.209 rw=randrw 00:08:12.209 time_based=1 00:08:12.209 runtime=6 00:08:12.209 ioengine=libaio 00:08:12.209 direct=1 00:08:12.209 bs=4096 00:08:12.209 iodepth=128 00:08:12.209 norandommap=0 00:08:12.209 numjobs=1 00:08:12.209 00:08:12.209 verify_dump=1 00:08:12.209 verify_backlog=512 00:08:12.209 verify_state_save=0 00:08:12.209 do_verify=1 00:08:12.209 verify=crc32c-intel 00:08:12.209 [job0] 00:08:12.209 filename=/dev/nvme0n1 00:08:12.209 Could not set queue depth (nvme0n1) 00:08:12.209 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:12.209 fio-3.35 00:08:12.209 Starting 1 thread 00:08:12.777 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:13.035 09:30:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:13.294 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:13.867 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:14.128 09:30:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 64595 00:08:18.317 00:08:18.317 job0: (groupid=0, jobs=1): err= 0: pid=64616: Tue Nov 5 09:31:03 2024 00:08:18.317 read: IOPS=10.1k, BW=39.3MiB/s (41.2MB/s)(236MiB/6006msec) 00:08:18.317 slat (usec): min=4, max=7665, avg=60.18, stdev=245.12 00:08:18.317 clat (usec): min=1906, max=18297, avg=8753.76, stdev=1552.35 00:08:18.317 lat (usec): min=1915, max=18332, avg=8813.94, stdev=1556.80 00:08:18.317 clat percentiles (usec): 00:08:18.317 | 1.00th=[ 4490], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 7963], 00:08:18.317 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8586], 60.00th=[ 8848], 00:08:18.317 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[10028], 95.00th=[12387], 00:08:18.317 | 99.00th=[13829], 99.50th=[14091], 99.90th=[14615], 99.95th=[14877], 00:08:18.317 | 99.99th=[15270] 00:08:18.317 bw ( KiB/s): min= 7424, max=27560, per=50.22%, avg=20230.55, stdev=6256.45, samples=11 00:08:18.317 iops : min= 1856, max= 6890, avg=5057.64, stdev=1564.11, samples=11 00:08:18.317 write: IOPS=5775, BW=22.6MiB/s (23.7MB/s)(119MiB/5294msec); 0 zone resets 00:08:18.317 slat (usec): min=12, max=3653, avg=66.98, stdev=171.47 00:08:18.317 clat (usec): min=2685, max=14525, avg=7600.81, stdev=1376.55 00:08:18.317 lat (usec): min=2712, max=15192, avg=7667.79, stdev=1381.62 00:08:18.317 clat percentiles (usec): 00:08:18.317 | 1.00th=[ 3359], 5.00th=[ 4424], 10.00th=[ 5932], 20.00th=[ 7046], 00:08:18.317 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 7963], 00:08:18.317 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[ 8717], 95.00th=[ 9110], 00:08:18.317 | 99.00th=[11731], 99.50th=[12387], 99.90th=[13435], 99.95th=[14091], 00:08:18.317 | 99.99th=[14353] 00:08:18.317 bw ( KiB/s): min= 7376, max=27224, per=87.81%, avg=20286.55, stdev=6129.36, samples=11 00:08:18.317 iops : min= 1844, max= 6806, avg=5071.64, stdev=1532.34, samples=11 00:08:18.317 lat (msec) : 2=0.01%, 4=1.34%, 10=90.94%, 20=7.71% 00:08:18.317 cpu : usr=5.46%, sys=19.82%, ctx=5252, majf=0, minf=90 00:08:18.317 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:18.317 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.317 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:18.317 issued rwts: total=60484,30575,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:18.317 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:18.317 00:08:18.317 Run status group 0 (all jobs): 00:08:18.317 READ: bw=39.3MiB/s (41.2MB/s), 39.3MiB/s-39.3MiB/s (41.2MB/s-41.2MB/s), io=236MiB (248MB), run=6006-6006msec 00:08:18.317 WRITE: bw=22.6MiB/s (23.7MB/s), 22.6MiB/s-22.6MiB/s (23.7MB/s-23.7MB/s), io=119MiB (125MB), run=5294-5294msec 00:08:18.317 00:08:18.317 Disk stats (read/write): 00:08:18.317 nvme0n1: ios=59552/29972, merge=0/0, ticks=500817/214840, in_queue=715657, util=98.63% 00:08:18.317 09:31:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:08:18.576 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=64702 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:08:18.835 09:31:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:08:18.835 [global] 00:08:18.835 thread=1 00:08:18.835 invalidate=1 00:08:18.835 rw=randrw 00:08:18.835 time_based=1 00:08:18.835 runtime=6 00:08:18.835 ioengine=libaio 00:08:18.835 direct=1 00:08:18.835 bs=4096 00:08:18.835 iodepth=128 00:08:18.835 norandommap=0 00:08:18.835 numjobs=1 00:08:18.835 00:08:18.835 verify_dump=1 00:08:18.835 verify_backlog=512 00:08:18.835 verify_state_save=0 00:08:18.835 do_verify=1 00:08:18.835 verify=crc32c-intel 00:08:18.836 [job0] 00:08:18.836 filename=/dev/nvme0n1 00:08:18.836 Could not set queue depth (nvme0n1) 00:08:19.094 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:19.094 fio-3.35 00:08:19.094 Starting 1 thread 00:08:20.062 09:31:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:08:20.320 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:08:20.578 
09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:20.578 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:08:20.837 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:08:21.096 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:08:21.097 09:31:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 64702 00:08:25.347 00:08:25.347 job0: (groupid=0, jobs=1): err= 0: pid=64723: Tue Nov 5 09:31:10 2024 00:08:25.347 read: IOPS=11.4k, BW=44.6MiB/s (46.8MB/s)(268MiB/6008msec) 00:08:25.347 slat (usec): min=2, max=6176, avg=43.42, stdev=197.15 00:08:25.347 clat (usec): min=1352, max=16470, avg=7723.51, stdev=1984.96 00:08:25.347 lat (usec): min=1363, max=16479, avg=7766.93, stdev=2001.66 00:08:25.347 clat percentiles (usec): 00:08:25.347 | 1.00th=[ 3228], 5.00th=[ 4228], 10.00th=[ 4948], 20.00th=[ 5997], 00:08:25.347 | 30.00th=[ 7046], 40.00th=[ 7635], 50.00th=[ 8029], 60.00th=[ 8291], 00:08:25.347 | 70.00th=[ 8586], 80.00th=[ 8979], 90.00th=[ 9503], 95.00th=[11076], 00:08:25.347 | 99.00th=[13304], 99.50th=[13698], 99.90th=[14353], 99.95th=[14484], 00:08:25.347 | 99.99th=[15401] 00:08:25.347 bw ( KiB/s): min= 9936, max=37736, per=52.19%, avg=23838.67, stdev=8570.04, samples=12 00:08:25.347 iops : min= 2484, max= 9434, avg=5959.67, stdev=2142.51, samples=12 00:08:25.347 write: IOPS=6696, BW=26.2MiB/s (27.4MB/s)(140MiB/5345msec); 0 zone resets 00:08:25.347 slat (usec): min=11, max=2210, avg=52.85, stdev=137.88 00:08:25.347 clat (usec): min=1639, max=15004, avg=6514.95, stdev=1835.47 00:08:25.347 lat (usec): min=1663, max=15343, avg=6567.80, stdev=1851.73 00:08:25.347 clat percentiles (usec): 00:08:25.347 | 1.00th=[ 2769], 5.00th=[ 3458], 10.00th=[ 3884], 20.00th=[ 4490], 00:08:25.347 | 30.00th=[ 5276], 40.00th=[ 6456], 50.00th=[ 7046], 60.00th=[ 7439], 00:08:25.347 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8455], 95.00th=[ 8717], 00:08:25.347 | 99.00th=[11207], 99.50th=[11994], 99.90th=[13435], 99.95th=[13960], 00:08:25.347 | 99.99th=[14353] 00:08:25.347 bw ( KiB/s): min=10456, max=37984, per=88.95%, avg=23826.67, stdev=8419.76, samples=12 00:08:25.347 iops : min= 2614, max= 9496, avg=5956.67, stdev=2104.94, samples=12 00:08:25.347 lat (msec) : 2=0.06%, 4=6.61%, 10=88.58%, 20=4.75% 00:08:25.347 cpu : usr=5.64%, sys=21.82%, ctx=5915, majf=0, minf=108 00:08:25.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:08:25.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:25.347 issued rwts: total=68599,35791,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:25.347 
00:08:25.347 Run status group 0 (all jobs): 00:08:25.347 READ: bw=44.6MiB/s (46.8MB/s), 44.6MiB/s-44.6MiB/s (46.8MB/s-46.8MB/s), io=268MiB (281MB), run=6008-6008msec 00:08:25.347 WRITE: bw=26.2MiB/s (27.4MB/s), 26.2MiB/s-26.2MiB/s (27.4MB/s-27.4MB/s), io=140MiB (147MB), run=5345-5345msec 00:08:25.347 00:08:25.347 Disk stats (read/write): 00:08:25.347 nvme0n1: ios=67944/34932, merge=0/0, ticks=500556/211852, in_queue=712408, util=98.61% 00:08:25.347 09:31:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:25.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1221 -- # local i=0 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1233 -- # return 0 00:08:25.347 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.606 rmmod nvme_tcp 00:08:25.606 rmmod nvme_fabrics 00:08:25.606 rmmod nvme_keyring 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n 64513 ']' 00:08:25.606 09:31:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 64513 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@952 -- # '[' -z 64513 ']' 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # kill -0 64513 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # uname 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64513 00:08:25.606 killing process with pid 64513 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64513' 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@971 -- # kill 64513 00:08:25.606 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@976 -- # wait 64513 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:25.864 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:25.865 09:31:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:25.865 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:08:26.124 ************************************ 00:08:26.124 END TEST nvmf_target_multipath 00:08:26.124 ************************************ 00:08:26.124 00:08:26.124 real 0m19.188s 00:08:26.124 user 1m11.061s 00:08:26.124 sys 0m9.883s 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:26.124 ************************************ 00:08:26.124 START TEST nvmf_zcopy 00:08:26.124 ************************************ 00:08:26.124 09:31:11 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:26.124 * Looking for test storage... 
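The ANA failover exercised above comes down to two moving parts: the target-side RPC that retags each listener, and the initiator-side sysfs file the test polls. A minimal sketch using the same RPC and sysfs paths seen in this log (the loop is illustrative; the test script issues each call individually):

for addr in 10.0.0.3 10.0.0.4; do
    # retag both listeners of cnode1; -n accepts optimized,
    # non_optimized or inaccessible, as used throughout this run
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a "$addr" -s 4420 -n optimized
done
# the kernel initiator republishes the negotiated state per path:
cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state   # expect: optimized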
00:08:26.124 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:26.124 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:26.124 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:08:26.124 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:26.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.384 --rc genhtml_branch_coverage=1 00:08:26.384 --rc genhtml_function_coverage=1 00:08:26.384 --rc genhtml_legend=1 00:08:26.384 --rc geninfo_all_blocks=1 00:08:26.384 --rc geninfo_unexecuted_blocks=1 00:08:26.384 00:08:26.384 ' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:26.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.384 --rc genhtml_branch_coverage=1 00:08:26.384 --rc genhtml_function_coverage=1 00:08:26.384 --rc genhtml_legend=1 00:08:26.384 --rc geninfo_all_blocks=1 00:08:26.384 --rc geninfo_unexecuted_blocks=1 00:08:26.384 00:08:26.384 ' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:26.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.384 --rc genhtml_branch_coverage=1 00:08:26.384 --rc genhtml_function_coverage=1 00:08:26.384 --rc genhtml_legend=1 00:08:26.384 --rc geninfo_all_blocks=1 00:08:26.384 --rc geninfo_unexecuted_blocks=1 00:08:26.384 00:08:26.384 ' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:26.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.384 --rc genhtml_branch_coverage=1 00:08:26.384 --rc genhtml_function_coverage=1 00:08:26.384 --rc genhtml_legend=1 00:08:26.384 --rc geninfo_all_blocks=1 00:08:26.384 --rc geninfo_unexecuted_blocks=1 00:08:26.384 00:08:26.384 ' 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
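The lt 1.15 2 trace above is scripts/common.sh comparing the installed lcov version field by field after splitting on ".", "-" and ":". A condensed re-implementation of the same idea (illustrative only, not the exact upstream code):

version_lt() {                        # returns 0 when $1 < $2
    local IFS=.-: i
    local -a a=($1) b=($2)
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        ((${a[i]:-0} < ${b[i]:-0})) && return 0
        ((${a[i]:-0} > ${b[i]:-0})) && return 1
    done
    return 1                          # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"   # prints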
00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:26.384 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:26.385 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
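nvmftestinit now builds the virtual test network the rest of this run talks over: veth pairs for two initiator and two target interfaces, the target ends moved into the nvmf_tgt_ns_spdk namespace, everything joined by one bridge. Condensed to a single initiator/target path from the commands that follow in this log (the script repeats the pattern for the *_if2 interfaces, brings each link up, and adds iptables ACCEPT rules for port 4420; those steps are omitted here for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge both
ip link set nvmf_tgt_br master nvmf_br                       # veth peers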
00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:26.385 Cannot find device "nvmf_init_br" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:08:26.385 09:31:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:26.385 Cannot find device "nvmf_init_br2" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:26.385 Cannot find device "nvmf_tgt_br" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:26.385 Cannot find device "nvmf_tgt_br2" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:26.385 Cannot find device "nvmf_init_br" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:26.385 Cannot find device "nvmf_init_br2" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:26.385 Cannot find device "nvmf_tgt_br" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:26.385 Cannot find device "nvmf_tgt_br2" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:26.385 Cannot find device "nvmf_br" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:26.385 Cannot find device "nvmf_init_if" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:26.385 Cannot find device "nvmf_init_if2" 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:26.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:26.385 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:26.385 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:26.386 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:26.645 09:31:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:26.645 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:26.645 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:08:26.645 00:08:26.645 --- 10.0.0.3 ping statistics --- 00:08:26.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.645 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:26.645 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:26.645 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.036 ms 00:08:26.645 00:08:26.645 --- 10.0.0.4 ping statistics --- 00:08:26.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.645 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:26.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:26.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:08:26.645 00:08:26.645 --- 10.0.0.1 ping statistics --- 00:08:26.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.645 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:26.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:26.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.039 ms 00:08:26.645 00:08:26.645 --- 10.0.0.2 ping statistics --- 00:08:26.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:26.645 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=65030 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 65030 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 65030 ']' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:26.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:26.645 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:26.645 [2024-11-05 09:31:12.578117] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:08:26.645 [2024-11-05 09:31:12.578205] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.904 [2024-11-05 09:31:12.729113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.904 [2024-11-05 09:31:12.766373] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:26.904 [2024-11-05 09:31:12.766432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:26.904 [2024-11-05 09:31:12.766446] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:26.905 [2024-11-05 09:31:12.766456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:26.905 [2024-11-05 09:31:12.766465] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:26.905 [2024-11-05 09:31:12.766824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.905 [2024-11-05 09:31:12.800955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.905 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.905 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:08:26.905 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:26.905 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.905 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.164 [2024-11-05 09:31:12.907392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.164 [2024-11-05 09:31:12.923507] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.164 malloc0 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:27.164 { 00:08:27.164 "params": { 00:08:27.164 "name": "Nvme$subsystem", 00:08:27.164 "trtype": "$TEST_TRANSPORT", 00:08:27.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.164 "adrfam": "ipv4", 00:08:27.164 "trsvcid": "$NVMF_PORT", 00:08:27.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.164 "hdgst": ${hdgst:-false}, 00:08:27.164 "ddgst": ${ddgst:-false} 00:08:27.164 }, 00:08:27.164 "method": "bdev_nvme_attach_controller" 00:08:27.164 } 00:08:27.164 EOF 00:08:27.164 )") 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
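gen_nvmf_target_json, traced here, assembles the bdev_nvme_attach_controller entry printed just below and hands it to bdevperf over an anonymous pipe (--json /dev/fd/62). Written to a file, an equivalent standalone invocation would look roughly like this; the outer "subsystems" wrapper is the standard SPDK JSON-config shape and is assumed rather than shown in the log:

cat > /tmp/bdevperf.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.3",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# same workload parameters as this run: 10 s, QD 128, verify, 8 KiB I/O
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /tmp/bdevperf.json -t 10 -q 128 -w verify -o 8192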
00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:27.164 09:31:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:27.164 "params": { 00:08:27.164 "name": "Nvme1", 00:08:27.164 "trtype": "tcp", 00:08:27.164 "traddr": "10.0.0.3", 00:08:27.164 "adrfam": "ipv4", 00:08:27.164 "trsvcid": "4420", 00:08:27.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:27.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:27.164 "hdgst": false, 00:08:27.164 "ddgst": false 00:08:27.164 }, 00:08:27.164 "method": "bdev_nvme_attach_controller" 00:08:27.164 }' 00:08:27.164 [2024-11-05 09:31:13.017290] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:08:27.164 [2024-11-05 09:31:13.017401] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65054 ] 00:08:27.423 [2024-11-05 09:31:13.174241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.423 [2024-11-05 09:31:13.214273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.423 [2024-11-05 09:31:13.256552] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.423 Running I/O for 10 seconds... 00:08:29.735 6156.00 IOPS, 48.09 MiB/s [2024-11-05T09:31:16.629Z] 6031.00 IOPS, 47.12 MiB/s [2024-11-05T09:31:17.565Z] 5988.33 IOPS, 46.78 MiB/s [2024-11-05T09:31:18.503Z] 5964.25 IOPS, 46.60 MiB/s [2024-11-05T09:31:19.446Z] 5959.00 IOPS, 46.55 MiB/s [2024-11-05T09:31:20.380Z] 5955.83 IOPS, 46.53 MiB/s [2024-11-05T09:31:21.756Z] 5962.29 IOPS, 46.58 MiB/s [2024-11-05T09:31:22.693Z] 5961.25 IOPS, 46.57 MiB/s [2024-11-05T09:31:23.667Z] 5970.22 IOPS, 46.64 MiB/s [2024-11-05T09:31:23.667Z] 5970.70 IOPS, 46.65 MiB/s 00:08:37.709 Latency(us) 00:08:37.709 [2024-11-05T09:31:23.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.709 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:37.709 Verification LBA range: start 0x0 length 0x1000 00:08:37.709 Nvme1n1 : 10.01 5971.21 46.65 0.00 0.00 21365.11 558.55 34317.03 00:08:37.709 [2024-11-05T09:31:23.667Z] =================================================================================================================== 00:08:37.709 [2024-11-05T09:31:23.667Z] Total : 5971.21 46.65 0.00 0.00 21365.11 558.55 34317.03 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=65167 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:37.709 { 00:08:37.709 "params": { 00:08:37.709 "name": "Nvme$subsystem", 00:08:37.709 "trtype": "$TEST_TRANSPORT", 00:08:37.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:37.709 "adrfam": "ipv4", 00:08:37.709 "trsvcid": "$NVMF_PORT", 00:08:37.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:37.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:37.709 "hdgst": ${hdgst:-false}, 00:08:37.709 "ddgst": ${ddgst:-false} 00:08:37.709 }, 00:08:37.709 "method": "bdev_nvme_attach_controller" 00:08:37.709 } 00:08:37.709 EOF 00:08:37.709 )") 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:08:37.709 [2024-11-05 09:31:23.521071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.521115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:08:37.709 09:31:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:37.709 "params": { 00:08:37.709 "name": "Nvme1", 00:08:37.709 "trtype": "tcp", 00:08:37.709 "traddr": "10.0.0.3", 00:08:37.709 "adrfam": "ipv4", 00:08:37.709 "trsvcid": "4420", 00:08:37.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:37.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:37.709 "hdgst": false, 00:08:37.709 "ddgst": false 00:08:37.709 }, 00:08:37.709 "method": "bdev_nvme_attach_controller" 00:08:37.709 }' 00:08:37.709 [2024-11-05 09:31:23.533043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.533071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.545039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.545067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.557044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.557070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.569051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.569078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.573856] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:08:37.709 [2024-11-05 09:31:23.573942] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65167 ] 00:08:37.709 [2024-11-05 09:31:23.581054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.581078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.593087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.593122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.605063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.605090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.617067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.617092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.629071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.629096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.641070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.641094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.649072] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.649097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.709 [2024-11-05 09:31:23.657101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.709 [2024-11-05 09:31:23.657131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.669139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.669184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.681113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.681152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.693105] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.693138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.705140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.705183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.717127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.717164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.724664] app.c: 919:spdk_app_start: *NOTICE*: 
Total cores available: 1 00:08:37.968 [2024-11-05 09:31:23.729123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.729157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.741135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.741173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.753148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.753193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.758384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.968 [2024-11-05 09:31:23.765130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.765159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.777151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.777191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.789166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.789214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.798548] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.968 [2024-11-05 09:31:23.801165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.801224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.813171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.813226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.825142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.825172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.837230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.837285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.968 [2024-11-05 09:31:23.849223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.968 [2024-11-05 09:31:23.849267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.969 [2024-11-05 09:31:23.861215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.969 [2024-11-05 09:31:23.861258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.969 [2024-11-05 09:31:23.873219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.969 [2024-11-05 09:31:23.873267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.969 [2024-11-05 09:31:23.885247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:37.969 [2024-11-05 09:31:23.885298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.969 [2024-11-05 09:31:23.897273] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.969 [2024-11-05 09:31:23.897302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.969 Running I/O for 5 seconds... 00:08:37.969 [2024-11-05 09:31:23.909239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.969 [2024-11-05 09:31:23.909280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.227 [2024-11-05 09:31:23.928707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.227 [2024-11-05 09:31:23.928776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.227 [2024-11-05 09:31:23.944494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.227 [2024-11-05 09:31:23.944538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.227 [2024-11-05 09:31:23.959882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.227 [2024-11-05 09:31:23.959970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.227 [2024-11-05 09:31:23.969633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.227 [2024-11-05 09:31:23.969665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:23.985986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:23.986035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.001577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.001636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.010908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.010971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.027209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.027250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.045533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.045570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.061497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.061533] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.078296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.078329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.094816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.094880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:08:38.228 [2024-11-05 09:31:24.112148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.112181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.129400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.129456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.145392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.145439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.155110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.155150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.228 [2024-11-05 09:31:24.170785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.228 [2024-11-05 09:31:24.170836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.486 [2024-11-05 09:31:24.188576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.188645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.203882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.203960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.214012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.214044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.225867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.225937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.241537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.241587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.253725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.253770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.270389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.270424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.285941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.285991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.304643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.304687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.320266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 
[2024-11-05 09:31:24.320303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.337542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.337587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.353863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.353902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.369907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.369948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.386795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.386861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.404136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.404183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.419356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.419403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.428413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.428450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.487 [2024-11-05 09:31:24.444866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.487 [2024-11-05 09:31:24.444904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.455349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.455395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.470122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.470157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.487424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.487468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.503964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.504000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.521756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.521823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.537752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.537816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.554371] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.554410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.570819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.570879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.588594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.588628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.603377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.603415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.619013] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.619049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.628179] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.628211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.643754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.643799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.658867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.658908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.668927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.668959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.684167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.684203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.746 [2024-11-05 09:31:24.702149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.746 [2024-11-05 09:31:24.702183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.717165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.717204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.727035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.727070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.741923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.741963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.757683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.757723] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.774994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.775034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.791336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.791375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.800948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.800987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.816782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.816830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.833216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.833270] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.849521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.849561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.866687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.866728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.882909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.882945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.899296] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.899331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 11429.00 IOPS, 89.29 MiB/s [2024-11-05T09:31:24.963Z] [2024-11-05 09:31:24.915781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.915854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.933756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.933796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.948299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.948337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.005 [2024-11-05 09:31:24.963477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.005 [2024-11-05 09:31:24.963514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.264 [2024-11-05 09:31:24.973073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.264 [2024-11-05 09:31:24.973107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.264 [2024-11-05 
09:31:24.988998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.264 [2024-11-05 09:31:24.989034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.264 [2024-11-05 09:31:25.007223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.264 [2024-11-05 09:31:25.007268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.264 [2024-11-05 09:31:25.022364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.264 [2024-11-05 09:31:25.022403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.264 [2024-11-05 09:31:25.031751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.031783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.048467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.048506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.065145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.065181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.082339] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.082383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.100169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.100207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.115206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.115263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.130883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.130933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.139739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.139773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.156238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.156273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.172711] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.172749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.190993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.191027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.205819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.205899] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.265 [2024-11-05 09:31:25.215338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.265 [2024-11-05 09:31:25.215372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.231653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.231709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.247158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.247195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.263351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.263385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.280361] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.280396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.296858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.296902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.313141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.313178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.323041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.323073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.338391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.338438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.356214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.356257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.371082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.371135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.386300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.386333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.402100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.402137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.419202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.419244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.436719] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.436767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.451245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.451280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.524 [2024-11-05 09:31:25.466788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.524 [2024-11-05 09:31:25.466861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.485055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.485115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.499819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.499874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.515452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.515487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.533001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.533034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.548008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.548047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.557615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.557648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.573810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.573860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.592467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.592506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.607238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.607280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.618809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.618862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.634660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.634699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.652447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.652488] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.667817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.667891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.677367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.677419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.693293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.693332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.709960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.710001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:39.783 [2024-11-05 09:31:25.726163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:39.783 [2024-11-05 09:31:25.726205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.743927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.743965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.759761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.759796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.778250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.778284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.793448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.793515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.811490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.811529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.826155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.826191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.842015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.842050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.859732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.859767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.875740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.875773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.894197] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.894253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.909539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.909600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 11418.50 IOPS, 89.21 MiB/s [2024-11-05T09:31:26.000Z] [2024-11-05 09:31:25.927969] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.928027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.943490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.943527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.960964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.960996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.977510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.977541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.042 [2024-11-05 09:31:25.996216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.042 [2024-11-05 09:31:25.996251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.011412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.011441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.027440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.027475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.044090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.044156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.062608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.062657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.077610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.077660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.087703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.087747] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.103250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.103295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.121253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:40.302 [2024-11-05 09:31:26.121283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.135349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.135402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.151572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.151618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.168388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.168420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.185366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.185398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.201925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.201967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.219278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.219341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.235487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.235560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.302 [2024-11-05 09:31:26.252338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.302 [2024-11-05 09:31:26.252371] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.268739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.268791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.286715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.286759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.302402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.302448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.319985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.320029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.335947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.336011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.351912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.351986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.361466] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.361541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.378095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.378142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.393239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.393276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.410447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.410493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.427168] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.427216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.443953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.443996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.462048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.462087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.476960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.477011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.493334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.493383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.562 [2024-11-05 09:31:26.510449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.562 [2024-11-05 09:31:26.510509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.526062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.526128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.536010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.536067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.551786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.551830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.567977] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.568022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.585593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.585637] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.602866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.602927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.619614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.619654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.635699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.635759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.654652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.654711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.670240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.670271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.821 [2024-11-05 09:31:26.687434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.821 [2024-11-05 09:31:26.687466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.822 [2024-11-05 09:31:26.704274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.822 [2024-11-05 09:31:26.704306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.822 [2024-11-05 09:31:26.721736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.822 [2024-11-05 09:31:26.721778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.822 [2024-11-05 09:31:26.736920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.822 [2024-11-05 09:31:26.736965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.822 [2024-11-05 09:31:26.752804] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.822 [2024-11-05 09:31:26.752847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:40.822 [2024-11-05 09:31:26.769151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:40.822 [2024-11-05 09:31:26.769183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.786032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.786061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.802039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.802082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.819681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.819725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.834440] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.834500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.852100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.852148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.867768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.867800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.885643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.885677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.896299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.896352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 11404.33 IOPS, 89.10 MiB/s [2024-11-05T09:31:27.040Z] [2024-11-05 09:31:26.910591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.910624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.926705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.926751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.944399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.944431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.960271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.960303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.978315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.978360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:26.993178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:26.993221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:27.010048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:27.010092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:27.025889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:27.025972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.082 [2024-11-05 09:31:27.035251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.082 [2024-11-05 09:31:27.035296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.342 [2024-11-05 09:31:27.051058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:41.342 [2024-11-05 09:31:27.051088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:41.342 [2024-11-05 09:31:27.067854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:41.342 [2024-11-05 09:31:27.067929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[the same two-line error pair -- "Requested NSID 1 already in use" from subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc.c:1517:nvmf_rpc_ns_paused -- repeats with fresh timestamps roughly every 10-20 ms from 09:31:27.084 through 09:31:28.871, about a hundred iterations in all, as the zcopy test keeps retrying nvmf_subsystem_add_ns with NSID 1 against the paused subsystem; the repetitions are condensed here. One periodic throughput marker appeared mid-stream: 11376.00 IOPS, 88.88 MiB/s [2024-11-05T09:31:28.078Z]]
00:08:43.159 [2024-11-05 09:31:28.887923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.159 [2024-11-05 09:31:28.887961] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.159 [2024-11-05 09:31:28.902914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.159 [2024-11-05 09:31:28.902958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.159 11370.80 IOPS, 88.83 MiB/s [2024-11-05T09:31:29.117Z] [2024-11-05 09:31:28.918282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.159 [2024-11-05 09:31:28.918314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.159 00:08:43.159 Latency(us) 00:08:43.159 [2024-11-05T09:31:29.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:43.159 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:43.159 Nvme1n1 : 5.01 11371.13 88.84 0.00 0.00 11240.71 4676.89 23950.43 00:08:43.159 [2024-11-05T09:31:29.117Z] =================================================================================================================== 00:08:43.159 [2024-11-05T09:31:29.117Z] Total : 11371.13 88.84 0.00 0.00 11240.71 4676.89 23950.43 00:08:43.160 [2024-11-05 09:31:28.927869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.927927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:28.939868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.939940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:28.951911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.951976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:28.963895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.963959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:28.975899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.975962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:28.987918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.987983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:28.999913] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:28.999966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:29.011915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:29.011962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:29.023892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:29.023956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:29.035915] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 
09:31:29.035983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:29.047916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:29.047942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 [2024-11-05 09:31:29.059917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:43.160 [2024-11-05 09:31:29.059941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:43.160 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (65167) - No such process 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 65167 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.160 delay0 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.160 09:31:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:08:43.419 [2024-11-05 09:31:29.277771] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:51.593 Initializing NVMe Controllers 00:08:51.593 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:08:51.593 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:51.593 Initialization complete. Launching workers. 
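[For readers reproducing the step above outside the CI harness: the trace swaps the namespace's backing device for a delay bdev so the abort example has slow, still-in-flight I/O to cancel. A minimal sketch of the same sequence, assuming a running target and SPDK's scripts/rpc.py client (the rpc_cmd seen in the trace is the test harness's wrapper around it); every argument is taken verbatim from the trace above:

  # Drop the namespace used by the add_ns loop, then re-add it backed by a
  # delay bdev layered on malloc0; -r/-t/-w/-n set average and p99
  # read/write latencies for SPDK's delay bdev.
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue random I/O at depth 64 for 5 seconds and submit aborts against it:
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The artificially slow bdev is what makes the abort counters that follow meaningful: with fast completions, nearly every abort would arrive after its target I/O had already finished.]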
00:08:51.593 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 298, failed: 11935 00:08:51.593 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12177, failed to submit 56 00:08:51.593 success 12023, unsuccessful 154, failed 0 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:51.593 rmmod nvme_tcp 00:08:51.593 rmmod nvme_fabrics 00:08:51.593 rmmod nvme_keyring 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 65030 ']' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 65030 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 65030 ']' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 65030 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65030 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:08:51.593 killing process with pid 65030 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65030' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 65030 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 65030 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:08:51.593 09:31:36 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:08:51.593 00:08:51.593 real 0m24.928s 00:08:51.593 user 0m40.979s 00:08:51.593 sys 0m6.883s 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:51.593 ************************************ 00:08:51.593 END TEST nvmf_zcopy 00:08:51.593 ************************************ 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:51.593 ************************************ 00:08:51.593 START TEST nvmf_nmic 00:08:51.593 ************************************ 00:08:51.593 09:31:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:51.593 * Looking for test storage... 00:08:51.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:08:51.593 09:31:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.593 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:51.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.594 --rc genhtml_branch_coverage=1 00:08:51.594 --rc genhtml_function_coverage=1 00:08:51.594 --rc genhtml_legend=1 00:08:51.594 --rc geninfo_all_blocks=1 00:08:51.594 --rc geninfo_unexecuted_blocks=1 00:08:51.594 00:08:51.594 ' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:51.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.594 --rc genhtml_branch_coverage=1 00:08:51.594 --rc genhtml_function_coverage=1 00:08:51.594 --rc genhtml_legend=1 00:08:51.594 --rc geninfo_all_blocks=1 00:08:51.594 --rc geninfo_unexecuted_blocks=1 00:08:51.594 00:08:51.594 ' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:51.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.594 --rc genhtml_branch_coverage=1 00:08:51.594 --rc genhtml_function_coverage=1 00:08:51.594 --rc genhtml_legend=1 00:08:51.594 --rc geninfo_all_blocks=1 00:08:51.594 --rc geninfo_unexecuted_blocks=1 00:08:51.594 00:08:51.594 ' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:51.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.594 --rc genhtml_branch_coverage=1 00:08:51.594 --rc genhtml_function_coverage=1 00:08:51.594 --rc genhtml_legend=1 00:08:51.594 --rc geninfo_all_blocks=1 00:08:51.594 --rc geninfo_unexecuted_blocks=1 00:08:51.594 00:08:51.594 ' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:51.594 09:31:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:51.594 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:51.594 09:31:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:51.594 Cannot 
find device "nvmf_init_br" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:51.594 Cannot find device "nvmf_init_br2" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:51.594 Cannot find device "nvmf_tgt_br" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:51.594 Cannot find device "nvmf_tgt_br2" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:51.594 Cannot find device "nvmf_init_br" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:51.594 Cannot find device "nvmf_init_br2" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:51.594 Cannot find device "nvmf_tgt_br" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:51.594 Cannot find device "nvmf_tgt_br2" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:51.594 Cannot find device "nvmf_br" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:51.594 Cannot find device "nvmf_init_if" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:51.594 Cannot find device "nvmf_init_if2" 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:51.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:51.594 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:51.594 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:51.595 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:51.595 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:51.595 00:08:51.595 --- 10.0.0.3 ping statistics --- 00:08:51.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.595 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:51.595 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:51.595 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.061 ms 00:08:51.595 00:08:51.595 --- 10.0.0.4 ping statistics --- 00:08:51.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.595 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:51.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:51.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:08:51.595 00:08:51.595 --- 10.0.0.1 ping statistics --- 00:08:51.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.595 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:51.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:51.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:08:51.595 00:08:51.595 --- 10.0.0.2 ping statistics --- 00:08:51.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.595 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=65552 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 65552 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 65552 ']' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:51.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:51.595 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:51.853 [2024-11-05 09:31:37.578117] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
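
At this point nvmfappstart has launched nvmf_tgt inside the namespace (the ip netns exec nvmf_tgt_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF command above) and waitforlisten is polling the JSON-RPC socket until the target answers; the SPDK/DPDK startup banner continues below. A rough sketch of that readiness wait, assuming SPDK's stock rpc.py — the retry count and sleep interval here are illustrative, not the helper's exact values:

  ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  for _ in $(seq 1 100); do
      # rpc_get_methods succeeds as soon as the RPC server is accepting connections
      scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.1
  done
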
00:08:51.853 [2024-11-05 09:31:37.578211] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.853 [2024-11-05 09:31:37.728980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.853 [2024-11-05 09:31:37.768454] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.853 [2024-11-05 09:31:37.768510] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.853 [2024-11-05 09:31:37.768530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:51.853 [2024-11-05 09:31:37.768540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:51.853 [2024-11-05 09:31:37.768549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:51.853 [2024-11-05 09:31:37.769540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.853 [2024-11-05 09:31:37.769602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.853 [2024-11-05 09:31:37.769687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.853 [2024-11-05 09:31:37.769693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.113 [2024-11-05 09:31:37.826935] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 [2024-11-05 09:31:37.933875] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 Malloc0 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:52.113 09:31:37 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 [2024-11-05 09:31:37.989629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:52.113 test case1: single bdev can't be used in multiple subsystems 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:37 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.113 [2024-11-05 09:31:38.013433] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:52.113 [2024-11-05 09:31:38.013470] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:52.113 [2024-11-05 09:31:38.013483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:52.113 request: 00:08:52.113 { 00:08:52.113 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:52.113 "namespace": { 00:08:52.113 "bdev_name": "Malloc0", 00:08:52.113 "no_auto_visible": false 00:08:52.113 }, 00:08:52.113 "method": "nvmf_subsystem_add_ns", 00:08:52.113 "req_id": 1 00:08:52.113 } 00:08:52.113 Got JSON-RPC error response 00:08:52.113 response: 00:08:52.113 { 00:08:52.113 "code": -32602, 00:08:52.113 "message": "Invalid parameters" 00:08:52.113 } 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:52.113 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:52.113 Adding namespace failed - expected result. 00:08:52.113 test case2: host connect to nvmf target in multiple paths 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:52.114 [2024-11-05 09:31:38.025581] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.114 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:08:52.372 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:08:52.372 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.372 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:08:52.372 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.373 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:08:52.373 09:31:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:08:54.903 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:08:54.903 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:08:54.903 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.903 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:08:54.903 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.903 09:31:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:08:54.903 09:31:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:54.903 [global] 00:08:54.903 thread=1 00:08:54.903 invalidate=1 00:08:54.903 rw=write 00:08:54.903 time_based=1 00:08:54.903 runtime=1 00:08:54.903 ioengine=libaio 00:08:54.903 direct=1 00:08:54.903 bs=4096 00:08:54.903 iodepth=1 00:08:54.903 norandommap=0 00:08:54.903 numjobs=1 00:08:54.903 00:08:54.903 verify_dump=1 00:08:54.903 verify_backlog=512 00:08:54.903 verify_state_save=0 00:08:54.903 do_verify=1 00:08:54.903 verify=crc32c-intel 00:08:54.903 [job0] 00:08:54.903 filename=/dev/nvme0n1 00:08:54.903 Could not set queue depth (nvme0n1) 00:08:54.903 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.903 fio-3.35 00:08:54.903 Starting 1 thread 00:08:55.843 00:08:55.843 job0: (groupid=0, jobs=1): err= 0: pid=65636: Tue Nov 5 09:31:41 2024 00:08:55.843 read: IOPS=2846, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec) 00:08:55.843 slat (nsec): min=11684, max=61469, avg=15240.51, stdev=4536.68 00:08:55.843 clat (usec): min=135, max=5506, avg=190.40, stdev=224.22 00:08:55.843 lat (usec): min=148, max=5524, avg=205.64, stdev=224.77 00:08:55.843 clat percentiles (usec): 00:08:55.843 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159], 00:08:55.843 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 180], 00:08:55.843 | 70.00th=[ 188], 80.00th=[ 194], 90.00th=[ 206], 95.00th=[ 215], 00:08:55.843 | 99.00th=[ 231], 99.50th=[ 253], 99.90th=[ 4113], 99.95th=[ 4359], 00:08:55.843 | 99.99th=[ 5538] 00:08:55.843 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:08:55.843 slat (usec): min=14, max=117, avg=22.22, stdev= 6.48 00:08:55.843 clat (usec): min=82, max=230, avg=109.37, stdev=16.32 00:08:55.843 lat (usec): min=100, max=347, avg=131.59, stdev=18.37 00:08:55.843 clat percentiles (usec): 00:08:55.843 | 1.00th=[ 87], 5.00th=[ 91], 10.00th=[ 93], 20.00th=[ 96], 00:08:55.843 | 30.00th=[ 99], 40.00th=[ 102], 50.00th=[ 105], 60.00th=[ 110], 00:08:55.843 | 70.00th=[ 116], 80.00th=[ 123], 90.00th=[ 133], 95.00th=[ 141], 00:08:55.843 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 202], 00:08:55.843 | 99.99th=[ 231] 00:08:55.843 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:08:55.843 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:55.843 lat (usec) : 100=17.78%, 250=81.96%, 500=0.05%, 750=0.03% 00:08:55.843 lat (msec) : 4=0.10%, 10=0.07% 00:08:55.843 cpu : usr=2.00%, sys=9.20%, ctx=5921, majf=0, minf=5 00:08:55.843 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.843 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.843 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.843 issued rwts: total=2849,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.843 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.843 00:08:55.843 Run status group 0 (all jobs): 00:08:55.843 READ: bw=11.1MiB/s (11.7MB/s), 11.1MiB/s-11.1MiB/s (11.7MB/s-11.7MB/s), io=11.1MiB (11.7MB), run=1001-1001msec 00:08:55.843 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:08:55.843 00:08:55.843 Disk stats (read/write): 00:08:55.843 nvme0n1: 
ios=2610/2700, merge=0/0, ticks=511/340, in_queue=851, util=89.88% 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.843 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:55.843 rmmod nvme_tcp 00:08:55.843 rmmod nvme_fabrics 00:08:55.843 rmmod nvme_keyring 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:08:55.843 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 65552 ']' 00:08:55.844 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 65552 00:08:55.844 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 65552 ']' 00:08:55.844 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 65552 00:08:55.844 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:08:55.844 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.844 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65552 00:08:56.102 killing process with pid 65552 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65552' 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@971 -- # kill 65552 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 65552 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:56.102 09:31:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:56.102 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:08:56.361 00:08:56.361 real 0m5.311s 00:08:56.361 user 0m15.552s 00:08:56.361 sys 0m2.300s 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:56.361 
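
Teardown mirrors setup. Every firewall rule that ipts inserted earlier carried -m comment --comment 'SPDK_NVMF:...', so iptr can strip exactly those rules and nothing else: dump the ruleset, filter out the tagged lines, and load the remainder back, which is the pipeline visible in the trace above:

  iptables-save | grep -v SPDK_NVMF | iptables-restore

The veth links, bridge, and namespace are then removed in reverse order of creation, and the test reports its wall-clock/user/sys times before the END TEST banner below.
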
************************************ 00:08:56.361 END TEST nvmf_nmic 00:08:56.361 ************************************ 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:56.361 ************************************ 00:08:56.361 START TEST nvmf_fio_target 00:08:56.361 ************************************ 00:08:56.361 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:56.621 * Looking for test storage... 00:08:56.621 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:56.621 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:56.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.622 --rc genhtml_branch_coverage=1 00:08:56.622 --rc genhtml_function_coverage=1 00:08:56.622 --rc genhtml_legend=1 00:08:56.622 --rc geninfo_all_blocks=1 00:08:56.622 --rc geninfo_unexecuted_blocks=1 00:08:56.622 00:08:56.622 ' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:56.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.622 --rc genhtml_branch_coverage=1 00:08:56.622 --rc genhtml_function_coverage=1 00:08:56.622 --rc genhtml_legend=1 00:08:56.622 --rc geninfo_all_blocks=1 00:08:56.622 --rc geninfo_unexecuted_blocks=1 00:08:56.622 00:08:56.622 ' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:56.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.622 --rc genhtml_branch_coverage=1 00:08:56.622 --rc genhtml_function_coverage=1 00:08:56.622 --rc genhtml_legend=1 00:08:56.622 --rc geninfo_all_blocks=1 00:08:56.622 --rc geninfo_unexecuted_blocks=1 00:08:56.622 00:08:56.622 ' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:56.622 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:56.622 --rc genhtml_branch_coverage=1 00:08:56.622 --rc genhtml_function_coverage=1 00:08:56.622 --rc genhtml_legend=1 00:08:56.622 --rc geninfo_all_blocks=1 00:08:56.622 --rc geninfo_unexecuted_blocks=1 00:08:56.622 00:08:56.622 ' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:56.622 
09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:56.622 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:56.622 09:31:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:56.622 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:56.623 Cannot find device "nvmf_init_br" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:56.623 Cannot find device "nvmf_init_br2" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:56.623 Cannot find device "nvmf_tgt_br" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:56.623 Cannot find device "nvmf_tgt_br2" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:56.623 Cannot find device "nvmf_init_br" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:56.623 Cannot find device "nvmf_init_br2" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:56.623 Cannot find device "nvmf_tgt_br" 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:08:56.623 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:56.882 Cannot find device "nvmf_tgt_br2" 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:56.882 Cannot find device "nvmf_br" 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:56.882 Cannot find device "nvmf_init_if" 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:56.882 Cannot find device "nvmf_init_if2" 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:56.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:08:56.882 
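
This is the same nvmf_veth_init dance as in the nvmf_nmic run, now executed for nvmf_fio_target: a probe-and-ignore teardown ("Cannot find device ..." followed by a traced true, consistent with a `|| true` guard on each command) before a fresh namespace and veth rebuild. Stated as the guard idiom the trace suggests — a sketch, not the helper's literal source:

  ip link set nvmf_init_br nomaster || true   # fine if the device never existed
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true

This keeps setup idempotent: a failed or interrupted earlier test cannot leave network state behind that breaks the next one.
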
09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:56.882 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:56.882 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:57.142 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:57.142 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:08:57.142 00:08:57.142 --- 10.0.0.3 ping statistics --- 00:08:57.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.142 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:57.142 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:57.142 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:08:57.142 00:08:57.142 --- 10.0.0.4 ping statistics --- 00:08:57.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.142 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:57.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:08:57.142 00:08:57.142 --- 10.0.0.1 ping statistics --- 00:08:57.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.142 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:57.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:57.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms
00:08:57.142
00:08:57.142 --- 10.0.0.2 ping statistics ---
00:08:57.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:57.142 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp
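The trace above is the complete test-network recipe from nvmf/common.sh: four veth pairs, one bridge, a dedicated network namespace for the target side, iptables openings for NVMe/TCP port 4420, and ping smoke tests in both directions. Condensed into a standalone script it looks roughly like the sketch below. This is a minimal reconstruction from the commands in this log, not the verbatim helper from nvmf/common.sh, so the error handling and helper structure (e.g. the ipts wrapper) are simplified.

#!/usr/bin/env bash
# Sketch: two host-side initiator veths and two namespace-side target veths,
# all bridged together so 10.0.0.1/2 (host) can reach 10.0.0.3/4 (namespace).
set -e
NS=nvmf_tgt_ns_spdk
ip netns add "$NS"
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up
# Bridge the peer ends together so initiator and target segments can talk.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
# Open TCP/4420 (NVMe/TCP) on the initiator interfaces; allow bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.3                       # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1   # namespace -> host

The four single-packet pings just above in the log are exactly this smoke test, run before modprobe nvme-tcp and the target application start that follows.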
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:57.142 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=65865
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 65865
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 65865 ']'
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:57.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:57.143 09:31:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:08:57.143 [2024-11-05 09:31:42.966452] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:08:57.143 [2024-11-05 09:31:42.966550] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:57.401 [2024-11-05 09:31:43.123985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:08:57.401 [2024-11-05 09:31:43.165367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:08:57.401 [2024-11-05 09:31:43.165426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:08:57.401 [2024-11-05 09:31:43.165440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:08:57.401 [2024-11-05 09:31:43.165450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:08:57.401 [2024-11-05 09:31:43.165459] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:08:57.401 [2024-11-05 09:31:43.166395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:08:57.401 [2024-11-05 09:31:43.167291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:08:57.401 [2024-11-05 09:31:43.167503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:08:57.401 [2024-11-05 09:31:43.167567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:57.401 [2024-11-05 09:31:43.203595] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:08:57.401 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:08:57.660 [2024-11-05 09:31:43.572091] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:08:57.660 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:58.227 09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
09:31:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:58.486 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
09:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:08:58.745 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.003 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:59.003 09:31:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:59.263 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.522 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:59.522 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:59.781 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:59.781 09:31:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:00.349 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:00.349 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:00.609 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:00.868 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:00.868 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:01.127 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:01.127 09:31:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:01.385 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:01.644 [2024-11-05 09:31:47.487676] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:01.644 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:01.902 09:31:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:02.160 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:09:02.418 09:31:48 
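With networking up, target/fio.sh provisions the target entirely over the RPC socket and then attaches the kernel initiator with nvme-cli. Below is a sketch assembled from the RPC calls traced above, not the literal body of target/fio.sh: the namespace-add order is condensed into one loop (the script interleaves it with the listener), and the rpc.py path and all names come straight from this log.

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
"$RPC" nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte I/O unit
for _ in 1 2 3 4 5 6 7; do
  "$RPC" bdev_malloc_create 64 512                      # 64 MiB, 512 B blocks -> Malloc0..Malloc6
done
"$RPC" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
"$RPC" bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
"$RPC" nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for ns in Malloc0 Malloc1 raid0 concat0; do
  "$RPC" nvmf_subsystem_add_ns "$NQN" "$ns"             # four namespaces behind one subsystem
done
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420
nvme connect -t tcp -n "$NQN" -a 10.0.0.3 -s 4420 \
  --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 \
  --hostid=5243355a-262e-4d66-b861-d6387f15e8f8

The waitforserial helper traced next is just a polling loop: every two seconds it runs lsblk -l -o NAME,SERIAL and greps for the subsystem serial SPDKISFASTANDAWESOME, returning once the count reaches the expected four devices, which is exactly what the @1200-@1210 lines below show.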
00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0
00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0
00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]]
00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4
00:09:02.418 09:31:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 ))
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter ))
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0
00:09:04.323 09:31:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:09:04.323 [global]
00:09:04.323 thread=1
00:09:04.323 invalidate=1
00:09:04.323 rw=write
00:09:04.323 time_based=1
00:09:04.323 runtime=1
00:09:04.323 ioengine=libaio
00:09:04.323 direct=1
00:09:04.323 bs=4096
00:09:04.323 iodepth=1
00:09:04.323 norandommap=0
00:09:04.323 numjobs=1
00:09:04.323
00:09:04.323 verify_dump=1
00:09:04.323 verify_backlog=512
00:09:04.323 verify_state_save=0
00:09:04.323 do_verify=1
00:09:04.323 verify=crc32c-intel
00:09:04.323 [job0]
00:09:04.323 filename=/dev/nvme0n1
00:09:04.323 [job1]
00:09:04.323 filename=/dev/nvme0n2
00:09:04.323 [job2]
00:09:04.323 filename=/dev/nvme0n3
00:09:04.323 [job3]
00:09:04.323 filename=/dev/nvme0n4
00:09:04.323 Could not set queue depth (nvme0n1)
00:09:04.323 Could not set queue depth (nvme0n2)
00:09:04.323 Could not set queue depth (nvme0n3)
00:09:04.323 Could not set queue depth (nvme0n4)
00:09:04.582 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:04.582 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:04.582 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:04.582 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:09:04.582 fio-3.35
00:09:04.582 Starting 4 threads
00:09:05.956
00:09:05.956 job0: (groupid=0, jobs=1): err= 0: pid=66053: Tue Nov 5 09:31:51 2024
00:09:05.956 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec)
00:09:05.956 slat (nsec): min=11687, max=58723, avg=16844.05, stdev=5636.72
00:09:05.956 clat (usec): min=138, max=222, avg=169.76, stdev=11.65
00:09:05.956 lat (usec): min=153, max=242, avg=186.60, stdev=12.95
00:09:05.956 clat percentiles (usec):
00:09:05.956 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161],
00:09:05.956 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172],
00:09:05.956 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192],
00:09:05.956 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 219], 99.95th=[ 221],
00:09:05.956 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:05.956 slat (usec): min=14, max=101, avg=23.45, stdev= 7.21 00:09:05.956 clat (usec): min=100, max=249, avg=129.05, stdev=10.97 00:09:05.956 lat (usec): min=124, max=351, avg=152.49, stdev=13.90 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 109], 5.00th=[ 114], 10.00th=[ 117], 20.00th=[ 121], 00:09:05.956 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 128], 60.00th=[ 131], 00:09:05.956 | 70.00th=[ 135], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:09:05.956 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 190], 00:09:05.956 | 99.99th=[ 249] 00:09:05.956 bw ( KiB/s): min=12288, max=12288, per=25.49%, avg=12288.00, stdev= 0.00, samples=1 00:09:05.956 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:05.956 lat (usec) : 250=100.00% 00:09:05.956 cpu : usr=2.20%, sys=9.90%, ctx=5859, majf=0, minf=3 00:09:05.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 issued rwts: total=2787,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.956 job1: (groupid=0, jobs=1): err= 0: pid=66054: Tue Nov 5 09:31:51 2024 00:09:05.956 read: IOPS=2886, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1001msec) 00:09:05.956 slat (nsec): min=11469, max=59389, avg=16647.60, stdev=6538.42 00:09:05.956 clat (usec): min=135, max=880, avg=167.02, stdev=18.40 00:09:05.956 lat (usec): min=148, max=892, avg=183.67, stdev=20.47 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:05.956 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 169], 00:09:05.956 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 184], 95.00th=[ 190], 00:09:05.956 | 99.00th=[ 206], 99.50th=[ 212], 99.90th=[ 227], 99.95th=[ 231], 00:09:05.956 | 99.99th=[ 881] 00:09:05.956 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:05.956 slat (nsec): min=13856, max=99199, avg=21134.15, stdev=5293.47 00:09:05.956 clat (usec): min=97, max=1486, avg=128.24, stdev=28.70 00:09:05.956 lat (usec): min=116, max=1505, avg=149.37, stdev=29.38 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 115], 20.00th=[ 119], 00:09:05.956 | 30.00th=[ 122], 40.00th=[ 125], 50.00th=[ 127], 60.00th=[ 129], 00:09:05.956 | 70.00th=[ 133], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:09:05.956 | 99.00th=[ 169], 99.50th=[ 206], 99.90th=[ 310], 99.95th=[ 363], 00:09:05.956 | 99.99th=[ 1483] 00:09:05.956 bw ( KiB/s): min=12288, max=12288, per=25.49%, avg=12288.00, stdev= 0.00, samples=1 00:09:05.956 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:05.956 lat (usec) : 100=0.08%, 250=99.75%, 500=0.13%, 1000=0.02% 00:09:05.956 lat (msec) : 2=0.02% 00:09:05.956 cpu : usr=2.00%, sys=9.40%, ctx=5961, majf=0, minf=12 00:09:05.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 issued rwts: total=2889,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.956 
job2: (groupid=0, jobs=1): err= 0: pid=66055: Tue Nov 5 09:31:51 2024 00:09:05.956 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:09:05.956 slat (nsec): min=11026, max=42215, avg=12871.84, stdev=2128.75 00:09:05.956 clat (usec): min=145, max=589, avg=172.05, stdev=13.82 00:09:05.956 lat (usec): min=158, max=617, avg=184.92, stdev=14.24 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:09:05.956 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 174], 00:09:05.956 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 194], 00:09:05.956 | 99.00th=[ 202], 99.50th=[ 208], 99.90th=[ 223], 99.95th=[ 239], 00:09:05.956 | 99.99th=[ 594] 00:09:05.956 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:05.956 slat (nsec): min=13552, max=86506, avg=19246.66, stdev=4091.75 00:09:05.956 clat (usec): min=102, max=2036, avg=135.81, stdev=38.41 00:09:05.956 lat (usec): min=121, max=2056, avg=155.06, stdev=38.78 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:09:05.956 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:09:05.956 | 70.00th=[ 141], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 155], 00:09:05.956 | 99.00th=[ 167], 99.50th=[ 174], 99.90th=[ 404], 99.95th=[ 635], 00:09:05.956 | 99.99th=[ 2040] 00:09:05.956 bw ( KiB/s): min=12288, max=12288, per=25.49%, avg=12288.00, stdev= 0.00, samples=1 00:09:05.956 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:05.956 lat (usec) : 250=99.91%, 500=0.02%, 750=0.05% 00:09:05.956 lat (msec) : 4=0.02% 00:09:05.956 cpu : usr=1.80%, sys=7.80%, ctx=5859, majf=0, minf=11 00:09:05.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 issued rwts: total=2787,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.956 job3: (groupid=0, jobs=1): err= 0: pid=66056: Tue Nov 5 09:31:51 2024 00:09:05.956 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:09:05.956 slat (nsec): min=11323, max=49328, avg=14135.61, stdev=3376.02 00:09:05.956 clat (usec): min=147, max=512, avg=194.36, stdev=31.97 00:09:05.956 lat (usec): min=159, max=552, avg=208.50, stdev=32.47 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 172], 00:09:05.956 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:09:05.956 | 70.00th=[ 196], 80.00th=[ 225], 90.00th=[ 249], 95.00th=[ 262], 00:09:05.956 | 99.00th=[ 281], 99.50th=[ 285], 99.90th=[ 306], 99.95th=[ 318], 00:09:05.956 | 99.99th=[ 515] 00:09:05.956 write: IOPS=2847, BW=11.1MiB/s (11.7MB/s)(11.1MiB/1001msec); 0 zone resets 00:09:05.956 slat (usec): min=13, max=113, avg=21.26, stdev= 5.99 00:09:05.956 clat (usec): min=106, max=295, avg=139.18, stdev=16.53 00:09:05.956 lat (usec): min=124, max=334, avg=160.44, stdev=18.18 00:09:05.956 clat percentiles (usec): 00:09:05.956 | 1.00th=[ 114], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:09:05.956 | 30.00th=[ 131], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:09:05.956 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 159], 95.00th=[ 172], 00:09:05.956 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 239], 99.95th=[ 251], 
00:09:05.956 | 99.99th=[ 297] 00:09:05.956 bw ( KiB/s): min=12288, max=12288, per=25.49%, avg=12288.00, stdev= 0.00, samples=1 00:09:05.956 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:05.956 lat (usec) : 250=95.56%, 500=4.42%, 750=0.02% 00:09:05.956 cpu : usr=1.90%, sys=7.90%, ctx=5410, majf=0, minf=14 00:09:05.956 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:05.956 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:05.956 issued rwts: total=2560,2850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:05.956 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:05.956 00:09:05.956 Run status group 0 (all jobs): 00:09:05.956 READ: bw=43.0MiB/s (45.1MB/s), 9.99MiB/s-11.3MiB/s (10.5MB/s-11.8MB/s), io=43.1MiB (45.1MB), run=1001-1001msec 00:09:05.956 WRITE: bw=47.1MiB/s (49.4MB/s), 11.1MiB/s-12.0MiB/s (11.7MB/s-12.6MB/s), io=47.1MiB (49.4MB), run=1001-1001msec 00:09:05.957 00:09:05.957 Disk stats (read/write): 00:09:05.957 nvme0n1: ios=2507/2560, merge=0/0, ticks=447/360, in_queue=807, util=87.68% 00:09:05.957 nvme0n2: ios=2594/2580, merge=0/0, ticks=470/358, in_queue=828, util=88.35% 00:09:05.957 nvme0n3: ios=2470/2560, merge=0/0, ticks=436/364, in_queue=800, util=89.04% 00:09:05.957 nvme0n4: ios=2089/2560, merge=0/0, ticks=415/384, in_queue=799, util=89.79% 00:09:05.957 09:31:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:05.957 [global] 00:09:05.957 thread=1 00:09:05.957 invalidate=1 00:09:05.957 rw=randwrite 00:09:05.957 time_based=1 00:09:05.957 runtime=1 00:09:05.957 ioengine=libaio 00:09:05.957 direct=1 00:09:05.957 bs=4096 00:09:05.957 iodepth=1 00:09:05.957 norandommap=0 00:09:05.957 numjobs=1 00:09:05.957 00:09:05.957 verify_dump=1 00:09:05.957 verify_backlog=512 00:09:05.957 verify_state_save=0 00:09:05.957 do_verify=1 00:09:05.957 verify=crc32c-intel 00:09:05.957 [job0] 00:09:05.957 filename=/dev/nvme0n1 00:09:05.957 [job1] 00:09:05.957 filename=/dev/nvme0n2 00:09:05.957 [job2] 00:09:05.957 filename=/dev/nvme0n3 00:09:05.957 [job3] 00:09:05.957 filename=/dev/nvme0n4 00:09:05.957 Could not set queue depth (nvme0n1) 00:09:05.957 Could not set queue depth (nvme0n2) 00:09:05.957 Could not set queue depth (nvme0n3) 00:09:05.957 Could not set queue depth (nvme0n4) 00:09:05.957 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.957 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.957 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.957 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:05.957 fio-3.35 00:09:05.957 Starting 4 threads 00:09:07.333 00:09:07.333 job0: (groupid=0, jobs=1): err= 0: pid=66114: Tue Nov 5 09:31:52 2024 00:09:07.333 read: IOPS=1810, BW=7241KiB/s (7415kB/s)(7248KiB/1001msec) 00:09:07.333 slat (nsec): min=11505, max=47641, avg=16483.12, stdev=5766.50 00:09:07.334 clat (usec): min=139, max=2148, avg=284.81, stdev=77.90 00:09:07.334 lat (usec): min=153, max=2161, avg=301.29, stdev=79.10 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 151], 5.00th=[ 237], 10.00th=[ 245], 20.00th=[ 255], 00:09:07.334 | 
30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:09:07.334 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 322], 95.00th=[ 474], 00:09:07.334 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 799], 99.95th=[ 2147], 00:09:07.334 | 99.99th=[ 2147] 00:09:07.334 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:07.334 slat (usec): min=17, max=111, avg=20.99, stdev= 7.55 00:09:07.334 clat (usec): min=53, max=2105, avg=197.30, stdev=57.16 00:09:07.334 lat (usec): min=113, max=2126, avg=218.29, stdev=56.91 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 106], 5.00th=[ 119], 10.00th=[ 137], 20.00th=[ 188], 00:09:07.334 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 204], 00:09:07.334 | 70.00th=[ 210], 80.00th=[ 215], 90.00th=[ 225], 95.00th=[ 239], 00:09:07.334 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 478], 99.95th=[ 963], 00:09:07.334 | 99.99th=[ 2114] 00:09:07.334 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:07.334 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:07.334 lat (usec) : 100=0.16%, 250=58.50%, 500=39.84%, 750=1.40%, 1000=0.05% 00:09:07.334 lat (msec) : 4=0.05% 00:09:07.334 cpu : usr=1.70%, sys=5.60%, ctx=3880, majf=0, minf=13 00:09:07.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 issued rwts: total=1812,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.334 job1: (groupid=0, jobs=1): err= 0: pid=66115: Tue Nov 5 09:31:52 2024 00:09:07.334 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:09:07.334 slat (nsec): min=11146, max=33437, avg=12903.32, stdev=1909.04 00:09:07.334 clat (usec): min=134, max=681, avg=166.41, stdev=24.80 00:09:07.334 lat (usec): min=147, max=692, avg=179.31, stdev=24.96 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:09:07.334 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:09:07.334 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 188], 95.00th=[ 229], 00:09:07.334 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 273], 99.95th=[ 281], 00:09:07.334 | 99.99th=[ 685] 00:09:07.334 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:09:07.334 slat (nsec): min=13342, max=90032, avg=19061.29, stdev=3907.45 00:09:07.334 clat (usec): min=88, max=516, avg=124.41, stdev=18.50 00:09:07.334 lat (usec): min=106, max=535, avg=143.47, stdev=19.40 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 97], 5.00th=[ 103], 10.00th=[ 106], 20.00th=[ 112], 00:09:07.334 | 30.00th=[ 115], 40.00th=[ 119], 50.00th=[ 122], 60.00th=[ 126], 00:09:07.334 | 70.00th=[ 130], 80.00th=[ 135], 90.00th=[ 147], 95.00th=[ 157], 00:09:07.334 | 99.00th=[ 180], 99.50th=[ 188], 99.90th=[ 233], 99.95th=[ 297], 00:09:07.334 | 99.99th=[ 519] 00:09:07.334 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:07.334 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:07.334 lat (usec) : 100=1.27%, 250=98.06%, 500=0.64%, 750=0.03% 00:09:07.334 cpu : usr=1.20%, sys=8.80%, ctx=6146, majf=0, minf=15 00:09:07.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.334 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 issued rwts: total=3069,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.334 job2: (groupid=0, jobs=1): err= 0: pid=66116: Tue Nov 5 09:31:52 2024 00:09:07.334 read: IOPS=1764, BW=7057KiB/s (7226kB/s)(7064KiB/1001msec) 00:09:07.334 slat (nsec): min=11118, max=39915, avg=13603.93, stdev=2878.84 00:09:07.334 clat (usec): min=150, max=6171, avg=287.04, stdev=172.78 00:09:07.334 lat (usec): min=162, max=6183, avg=300.64, stdev=173.12 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 212], 5.00th=[ 249], 10.00th=[ 255], 20.00th=[ 262], 00:09:07.334 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 277], 00:09:07.334 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 347], 00:09:07.334 | 99.00th=[ 461], 99.50th=[ 506], 99.90th=[ 3851], 99.95th=[ 6194], 00:09:07.334 | 99.99th=[ 6194] 00:09:07.334 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:07.334 slat (usec): min=16, max=953, avg=21.30, stdev=22.59 00:09:07.334 clat (usec): min=2, max=2807, avg=204.59, stdev=82.71 00:09:07.334 lat (usec): min=129, max=2837, avg=225.89, stdev=85.69 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 128], 5.00th=[ 137], 10.00th=[ 151], 20.00th=[ 186], 00:09:07.334 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:09:07.334 | 70.00th=[ 210], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 245], 00:09:07.334 | 99.00th=[ 379], 99.50th=[ 502], 99.90th=[ 1156], 99.95th=[ 1532], 00:09:07.334 | 99.99th=[ 2802] 00:09:07.334 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:09:07.334 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:07.334 lat (usec) : 4=0.05%, 250=54.25%, 500=45.15%, 750=0.34%, 1000=0.05% 00:09:07.334 lat (msec) : 2=0.08%, 4=0.05%, 10=0.03% 00:09:07.334 cpu : usr=1.30%, sys=5.50%, ctx=3820, majf=0, minf=13 00:09:07.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 issued rwts: total=1766,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.334 job3: (groupid=0, jobs=1): err= 0: pid=66117: Tue Nov 5 09:31:52 2024 00:09:07.334 read: IOPS=3002, BW=11.7MiB/s (12.3MB/s)(11.7MiB/1000msec) 00:09:07.334 slat (nsec): min=10843, max=32954, avg=12246.41, stdev=1599.40 00:09:07.334 clat (usec): min=140, max=332, avg=165.74, stdev=12.07 00:09:07.334 lat (usec): min=151, max=344, avg=177.98, stdev=12.23 00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:09:07.334 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:09:07.334 | 70.00th=[ 172], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 186], 00:09:07.334 | 99.00th=[ 200], 99.50th=[ 204], 99.90th=[ 229], 99.95th=[ 285], 00:09:07.334 | 99.99th=[ 334] 00:09:07.334 write: IOPS=3072, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1000msec); 0 zone resets 00:09:07.334 slat (nsec): min=13802, max=92845, avg=18560.83, stdev=3501.45 00:09:07.334 clat (usec): min=76, max=1527, avg=129.99, stdev=27.94 00:09:07.334 lat (usec): min=117, max=1546, avg=148.55, stdev=28.23 
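A quick arithmetic sanity check that applies to every summary in these runs: bandwidth is IOPS times block size, and fio prints it in both binary (MiB/s) and decimal (MB/s) units. Taking the job3 read line just above (IOPS=3002 at bs=4096) as a worked example:

awk 'BEGIN { iops = 3002; bs = 4096        # values from the job3 read line above
             printf "%.1f MiB/s  %.1f MB/s\n", iops*bs/1048576, iops*bs/1e6 }'
# prints: 11.7 MiB/s  12.3 MB/s, matching "BW=11.7MiB/s (12.3MB/s)" in the log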
00:09:07.334 clat percentiles (usec): 00:09:07.334 | 1.00th=[ 108], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 119], 00:09:07.334 | 30.00th=[ 123], 40.00th=[ 126], 50.00th=[ 129], 60.00th=[ 133], 00:09:07.334 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 145], 95.00th=[ 151], 00:09:07.334 | 99.00th=[ 161], 99.50th=[ 165], 99.90th=[ 180], 99.95th=[ 245], 00:09:07.334 | 99.99th=[ 1532] 00:09:07.334 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:09:07.334 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:09:07.334 lat (usec) : 100=0.02%, 250=99.92%, 500=0.05% 00:09:07.334 lat (msec) : 2=0.02% 00:09:07.334 cpu : usr=1.60%, sys=8.10%, ctx=6076, majf=0, minf=7 00:09:07.334 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:07.334 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:07.334 issued rwts: total=3002,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:07.334 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:07.334 00:09:07.334 Run status group 0 (all jobs): 00:09:07.334 READ: bw=37.7MiB/s (39.5MB/s), 7057KiB/s-12.0MiB/s (7226kB/s-12.6MB/s), io=37.7MiB (39.5MB), run=1000-1001msec 00:09:07.334 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1000-1001msec 00:09:07.334 00:09:07.334 Disk stats (read/write): 00:09:07.334 nvme0n1: ios=1586/1848, merge=0/0, ticks=437/371, in_queue=808, util=87.07% 00:09:07.334 nvme0n2: ios=2589/2688, merge=0/0, ticks=474/354, in_queue=828, util=88.21% 00:09:07.334 nvme0n3: ios=1536/1669, merge=0/0, ticks=439/358, in_queue=797, util=88.67% 00:09:07.334 nvme0n4: ios=2560/2633, merge=0/0, ticks=430/361, in_queue=791, util=89.76% 00:09:07.334 09:31:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:07.334 [global] 00:09:07.334 thread=1 00:09:07.334 invalidate=1 00:09:07.334 rw=write 00:09:07.334 time_based=1 00:09:07.334 runtime=1 00:09:07.334 ioengine=libaio 00:09:07.334 direct=1 00:09:07.334 bs=4096 00:09:07.334 iodepth=128 00:09:07.334 norandommap=0 00:09:07.334 numjobs=1 00:09:07.334 00:09:07.334 verify_dump=1 00:09:07.334 verify_backlog=512 00:09:07.334 verify_state_save=0 00:09:07.334 do_verify=1 00:09:07.334 verify=crc32c-intel 00:09:07.334 [job0] 00:09:07.334 filename=/dev/nvme0n1 00:09:07.334 [job1] 00:09:07.334 filename=/dev/nvme0n2 00:09:07.334 [job2] 00:09:07.334 filename=/dev/nvme0n3 00:09:07.334 [job3] 00:09:07.334 filename=/dev/nvme0n4 00:09:07.334 Could not set queue depth (nvme0n1) 00:09:07.334 Could not set queue depth (nvme0n2) 00:09:07.334 Could not set queue depth (nvme0n3) 00:09:07.334 Could not set queue depth (nvme0n4) 00:09:07.334 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.334 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.334 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.334 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:07.334 fio-3.35 00:09:07.334 Starting 4 threads 00:09:08.712 00:09:08.712 job0: (groupid=0, jobs=1): err= 0: pid=66172: Tue Nov 5 09:31:54 2024 00:09:08.712 read: IOPS=5280, BW=20.6MiB/s 
(21.6MB/s)(20.8MiB/1006msec) 00:09:08.712 slat (usec): min=5, max=4467, avg=90.42, stdev=400.95 00:09:08.712 clat (usec): min=1858, max=16839, avg=11997.98, stdev=1116.55 00:09:08.712 lat (usec): min=5668, max=16869, avg=12088.40, stdev=1123.10 00:09:08.712 clat percentiles (usec): 00:09:08.712 | 1.00th=[ 6915], 5.00th=[10290], 10.00th=[10945], 20.00th=[11469], 00:09:08.712 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:09:08.712 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13698], 00:09:08.712 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15795], 99.95th=[16319], 00:09:08.712 | 99.99th=[16909] 00:09:08.712 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:09:08.712 slat (usec): min=9, max=4912, avg=85.07, stdev=494.32 00:09:08.712 clat (usec): min=6075, max=16431, avg=11276.16, stdev=1027.46 00:09:08.712 lat (usec): min=6115, max=16475, avg=11361.23, stdev=1126.13 00:09:08.712 clat percentiles (usec): 00:09:08.712 | 1.00th=[ 7701], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10814], 00:09:08.712 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11338], 00:09:08.712 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12780], 00:09:08.712 | 99.00th=[15008], 99.50th=[15533], 99.90th=[16188], 99.95th=[16188], 00:09:08.712 | 99.99th=[16450] 00:09:08.712 bw ( KiB/s): min=22192, max=22864, per=35.34%, avg=22528.00, stdev=475.18, samples=2 00:09:08.712 iops : min= 5548, max= 5716, avg=5632.00, stdev=118.79, samples=2 00:09:08.712 lat (msec) : 2=0.01%, 10=4.71%, 20=95.28% 00:09:08.712 cpu : usr=5.27%, sys=14.03%, ctx=334, majf=0, minf=17 00:09:08.712 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:08.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.712 issued rwts: total=5312,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.712 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.712 job1: (groupid=0, jobs=1): err= 0: pid=66173: Tue Nov 5 09:31:54 2024 00:09:08.712 read: IOPS=2583, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1009msec) 00:09:08.712 slat (usec): min=6, max=8862, avg=162.03, stdev=721.91 00:09:08.712 clat (usec): min=5407, max=42788, avg=19644.79, stdev=4908.08 00:09:08.712 lat (usec): min=10306, max=42821, avg=19806.82, stdev=4953.21 00:09:08.713 clat percentiles (usec): 00:09:08.713 | 1.00th=[12125], 5.00th=[14746], 10.00th=[15270], 20.00th=[16909], 00:09:08.713 | 30.00th=[17171], 40.00th=[17433], 50.00th=[17433], 60.00th=[17957], 00:09:08.713 | 70.00th=[19792], 80.00th=[23987], 90.00th=[25297], 95.00th=[28967], 00:09:08.713 | 99.00th=[34866], 99.50th=[40109], 99.90th=[42730], 99.95th=[42730], 00:09:08.713 | 99.99th=[42730] 00:09:08.713 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:09:08.713 slat (usec): min=10, max=8558, avg=180.61, stdev=812.07 00:09:08.713 clat (usec): min=8234, max=78648, avg=24778.37, stdev=16376.22 00:09:08.713 lat (usec): min=8276, max=78674, avg=24958.97, stdev=16491.50 00:09:08.713 clat percentiles (usec): 00:09:08.713 | 1.00th=[10945], 5.00th=[11731], 10.00th=[12256], 20.00th=[12649], 00:09:08.713 | 30.00th=[12911], 40.00th=[14484], 50.00th=[17171], 60.00th=[19792], 00:09:08.713 | 70.00th=[27919], 80.00th=[38536], 90.00th=[51643], 95.00th=[62129], 00:09:08.713 | 99.00th=[70779], 99.50th=[72877], 99.90th=[78119], 99.95th=[78119], 00:09:08.713 | 99.99th=[79168] 00:09:08.713 bw ( 
KiB/s): min=10304, max=13596, per=18.75%, avg=11950.00, stdev=2327.80, samples=2 00:09:08.713 iops : min= 2576, max= 3399, avg=2987.50, stdev=581.95, samples=2 00:09:08.713 lat (msec) : 10=0.05%, 20=67.72%, 50=26.64%, 100=5.58% 00:09:08.713 cpu : usr=2.88%, sys=8.63%, ctx=269, majf=0, minf=13 00:09:08.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:08.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.713 issued rwts: total=2607,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.713 job2: (groupid=0, jobs=1): err= 0: pid=66174: Tue Nov 5 09:31:54 2024 00:09:08.713 read: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec) 00:09:08.713 slat (usec): min=5, max=4277, avg=102.81, stdev=406.85 00:09:08.713 clat (usec): min=9679, max=17798, avg=13425.66, stdev=1160.15 00:09:08.713 lat (usec): min=9709, max=19295, avg=13528.47, stdev=1204.79 00:09:08.713 clat percentiles (usec): 00:09:08.713 | 1.00th=[10290], 5.00th=[11207], 10.00th=[12256], 20.00th=[12911], 00:09:08.713 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:09:08.713 | 70.00th=[13698], 80.00th=[14091], 90.00th=[15008], 95.00th=[15533], 00:09:08.713 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17433], 99.95th=[17695], 00:09:08.713 | 99.99th=[17695] 00:09:08.713 write: IOPS=4806, BW=18.8MiB/s (19.7MB/s)(18.8MiB/1002msec); 0 zone resets 00:09:08.713 slat (usec): min=11, max=4986, avg=101.42, stdev=436.50 00:09:08.713 clat (usec): min=288, max=20082, avg=13440.57, stdev=1816.66 00:09:08.713 lat (usec): min=2903, max=20133, avg=13541.99, stdev=1856.20 00:09:08.713 clat percentiles (usec): 00:09:08.713 | 1.00th=[ 7242], 5.00th=[11731], 10.00th=[12256], 20.00th=[12518], 00:09:08.713 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13042], 60.00th=[13304], 00:09:08.713 | 70.00th=[13960], 80.00th=[14746], 90.00th=[15926], 95.00th=[16581], 00:09:08.713 | 99.00th=[17433], 99.50th=[17695], 99.90th=[19268], 99.95th=[19792], 00:09:08.713 | 99.99th=[20055] 00:09:08.713 bw ( KiB/s): min=20480, max=20480, per=32.13%, avg=20480.00, stdev= 0.00, samples=1 00:09:08.713 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:09:08.713 lat (usec) : 500=0.01% 00:09:08.713 lat (msec) : 4=0.44%, 10=0.81%, 20=98.74%, 50=0.01% 00:09:08.713 cpu : usr=4.10%, sys=13.79%, ctx=510, majf=0, minf=7 00:09:08.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:08.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.713 issued rwts: total=4608,4816,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.713 job3: (groupid=0, jobs=1): err= 0: pid=66175: Tue Nov 5 09:31:54 2024 00:09:08.713 read: IOPS=2106, BW=8425KiB/s (8627kB/s)(8484KiB/1007msec) 00:09:08.713 slat (usec): min=5, max=8181, avg=173.60, stdev=783.22 00:09:08.713 clat (usec): min=4769, max=54823, avg=22851.10, stdev=7100.08 00:09:08.713 lat (usec): min=9332, max=55982, avg=23024.70, stdev=7157.33 00:09:08.713 clat percentiles (usec): 00:09:08.713 | 1.00th=[ 9634], 5.00th=[15795], 10.00th=[17433], 20.00th=[18220], 00:09:08.713 | 30.00th=[18482], 40.00th=[19530], 50.00th=[21365], 60.00th=[23987], 00:09:08.713 | 70.00th=[24511], 80.00th=[25297], 
90.00th=[28705], 95.00th=[39584], 00:09:08.713 | 99.00th=[51119], 99.50th=[52691], 99.90th=[54789], 99.95th=[54789], 00:09:08.713 | 99.99th=[54789] 00:09:08.713 write: IOPS=2542, BW=9.93MiB/s (10.4MB/s)(10.0MiB/1007msec); 0 zone resets 00:09:08.713 slat (usec): min=9, max=8924, avg=239.36, stdev=967.29 00:09:08.713 clat (usec): min=13224, max=74854, avg=30524.15, stdev=17563.40 00:09:08.713 lat (usec): min=13246, max=74878, avg=30763.52, stdev=17686.17 00:09:08.713 clat percentiles (usec): 00:09:08.713 | 1.00th=[13960], 5.00th=[14484], 10.00th=[14615], 20.00th=[15139], 00:09:08.713 | 30.00th=[16319], 40.00th=[19792], 50.00th=[25822], 60.00th=[28705], 00:09:08.713 | 70.00th=[34866], 80.00th=[46400], 90.00th=[59507], 95.00th=[70779], 00:09:08.713 | 99.00th=[74974], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:09:08.713 | 99.99th=[74974] 00:09:08.713 bw ( KiB/s): min= 8192, max=11848, per=15.72%, avg=10020.00, stdev=2585.18, samples=2 00:09:08.713 iops : min= 2048, max= 2962, avg=2505.00, stdev=646.30, samples=2 00:09:08.713 lat (msec) : 10=0.66%, 20=41.36%, 50=49.54%, 100=8.44% 00:09:08.713 cpu : usr=2.58%, sys=7.06%, ctx=265, majf=0, minf=13 00:09:08.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:09:08.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:08.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:08.713 issued rwts: total=2121,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:08.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:08.713 00:09:08.713 Run status group 0 (all jobs): 00:09:08.713 READ: bw=56.7MiB/s (59.5MB/s), 8425KiB/s-20.6MiB/s (8627kB/s-21.6MB/s), io=57.2MiB (60.0MB), run=1002-1009msec 00:09:08.713 WRITE: bw=62.3MiB/s (65.3MB/s), 9.93MiB/s-21.9MiB/s (10.4MB/s-22.9MB/s), io=62.8MiB (65.9MB), run=1002-1009msec 00:09:08.713 00:09:08.713 Disk stats (read/write): 00:09:08.713 nvme0n1: ios=4658/4668, merge=0/0, ticks=26696/21687, in_queue=48383, util=87.06% 00:09:08.713 nvme0n2: ios=2609/2727, merge=0/0, ticks=24888/25132, in_queue=50020, util=88.34% 00:09:08.713 nvme0n3: ios=3878/4096, merge=0/0, ticks=16731/16070, in_queue=32801, util=89.06% 00:09:08.713 nvme0n4: ios=1536/2031, merge=0/0, ticks=11663/22583, in_queue=34246, util=89.73% 00:09:08.713 09:31:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:08.713 [global] 00:09:08.713 thread=1 00:09:08.713 invalidate=1 00:09:08.713 rw=randwrite 00:09:08.713 time_based=1 00:09:08.713 runtime=1 00:09:08.713 ioengine=libaio 00:09:08.713 direct=1 00:09:08.713 bs=4096 00:09:08.713 iodepth=128 00:09:08.713 norandommap=0 00:09:08.713 numjobs=1 00:09:08.713 00:09:08.713 verify_dump=1 00:09:08.713 verify_backlog=512 00:09:08.713 verify_state_save=0 00:09:08.713 do_verify=1 00:09:08.713 verify=crc32c-intel 00:09:08.713 [job0] 00:09:08.713 filename=/dev/nvme0n1 00:09:08.713 [job1] 00:09:08.713 filename=/dev/nvme0n2 00:09:08.713 [job2] 00:09:08.713 filename=/dev/nvme0n3 00:09:08.713 [job3] 00:09:08.713 filename=/dev/nvme0n4 00:09:08.713 Could not set queue depth (nvme0n1) 00:09:08.713 Could not set queue depth (nvme0n2) 00:09:08.713 Could not set queue depth (nvme0n3) 00:09:08.713 Could not set queue depth (nvme0n4) 00:09:08.713 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.713 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.713 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.713 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:08.713 fio-3.35 00:09:08.713 Starting 4 threads 00:09:10.088 00:09:10.088 job0: (groupid=0, jobs=1): err= 0: pid=66234: Tue Nov 5 09:31:55 2024 00:09:10.088 read: IOPS=2617, BW=10.2MiB/s (10.7MB/s)(10.3MiB/1003msec) 00:09:10.088 slat (usec): min=6, max=13701, avg=203.75, stdev=1135.85 00:09:10.088 clat (usec): min=601, max=50750, avg=26320.20, stdev=9600.38 00:09:10.088 lat (usec): min=6969, max=50763, avg=26523.95, stdev=9601.19 00:09:10.088 clat percentiles (usec): 00:09:10.088 | 1.00th=[ 7439], 5.00th=[16057], 10.00th=[17957], 20.00th=[18482], 00:09:10.088 | 30.00th=[19006], 40.00th=[19792], 50.00th=[24511], 60.00th=[28967], 00:09:10.088 | 70.00th=[29230], 80.00th=[32900], 90.00th=[42206], 95.00th=[46400], 00:09:10.088 | 99.00th=[50594], 99.50th=[50594], 99.90th=[50594], 99.95th=[50594], 00:09:10.088 | 99.99th=[50594] 00:09:10.088 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:09:10.088 slat (usec): min=9, max=12834, avg=144.30, stdev=729.71 00:09:10.088 clat (usec): min=11295, max=28672, avg=18646.18, stdev=4456.23 00:09:10.088 lat (usec): min=13940, max=28696, avg=18790.49, stdev=4432.64 00:09:10.088 clat percentiles (usec): 00:09:10.088 | 1.00th=[12256], 5.00th=[14222], 10.00th=[14353], 20.00th=[14746], 00:09:10.088 | 30.00th=[15008], 40.00th=[16188], 50.00th=[17171], 60.00th=[19792], 00:09:10.088 | 70.00th=[20055], 80.00th=[21627], 90.00th=[27395], 95.00th=[27919], 00:09:10.088 | 99.00th=[28705], 99.50th=[28705], 99.90th=[28705], 99.95th=[28705], 00:09:10.088 | 99.99th=[28705] 00:09:10.088 bw ( KiB/s): min= 8456, max=15616, per=18.47%, avg=12036.00, stdev=5062.88, samples=2 00:09:10.088 iops : min= 2114, max= 3904, avg=3009.00, stdev=1265.72, samples=2 00:09:10.088 lat (usec) : 750=0.02% 00:09:10.088 lat (msec) : 10=0.56%, 20=54.31%, 50=44.57%, 100=0.54% 00:09:10.088 cpu : usr=2.50%, sys=8.28%, ctx=180, majf=0, minf=19 00:09:10.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:10.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.088 issued rwts: total=2625,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.088 job1: (groupid=0, jobs=1): err= 0: pid=66235: Tue Nov 5 09:31:55 2024 00:09:10.088 read: IOPS=5142, BW=20.1MiB/s (21.1MB/s)(20.1MiB/1002msec) 00:09:10.088 slat (usec): min=5, max=5553, avg=90.18, stdev=426.24 00:09:10.088 clat (usec): min=282, max=15123, avg=11938.90, stdev=1010.81 00:09:10.088 lat (usec): min=2681, max=15153, avg=12029.08, stdev=918.52 00:09:10.088 clat percentiles (usec): 00:09:10.088 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[11600], 20.00th=[11731], 00:09:10.088 | 30.00th=[11863], 40.00th=[11863], 50.00th=[11994], 60.00th=[11994], 00:09:10.088 | 70.00th=[12125], 80.00th=[12256], 90.00th=[12387], 95.00th=[12649], 00:09:10.088 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15139], 99.95th=[15139], 00:09:10.088 | 99.99th=[15139] 00:09:10.088 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:10.088 slat (usec): min=10, max=2713, avg=87.91, stdev=374.87 00:09:10.088 clat 
(usec): min=5269, max=12568, avg=11553.39, stdev=706.96 00:09:10.088 lat (usec): min=5290, max=12588, avg=11641.30, stdev=601.94 00:09:10.088 clat percentiles (usec): 00:09:10.088 | 1.00th=[ 8717], 5.00th=[10945], 10.00th=[11207], 20.00th=[11338], 00:09:10.088 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11600], 60.00th=[11731], 00:09:10.088 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12125], 95.00th=[12256], 00:09:10.088 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12518], 99.95th=[12518], 00:09:10.088 | 99.99th=[12518] 00:09:10.088 bw ( KiB/s): min=21555, max=22784, per=34.03%, avg=22169.50, stdev=869.03, samples=2 00:09:10.088 iops : min= 5388, max= 5696, avg=5542.00, stdev=217.79, samples=2 00:09:10.088 lat (usec) : 500=0.01% 00:09:10.088 lat (msec) : 4=0.30%, 10=3.47%, 20=96.23% 00:09:10.088 cpu : usr=5.29%, sys=13.49%, ctx=339, majf=0, minf=15 00:09:10.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:10.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.088 issued rwts: total=5153,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.088 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.088 job2: (groupid=0, jobs=1): err= 0: pid=66236: Tue Nov 5 09:31:55 2024 00:09:10.088 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:09:10.088 slat (usec): min=6, max=4071, avg=101.20, stdev=402.71 00:09:10.088 clat (usec): min=9828, max=17329, avg=13307.84, stdev=969.84 00:09:10.088 lat (usec): min=9869, max=17373, avg=13409.04, stdev=1022.39 00:09:10.088 clat percentiles (usec): 00:09:10.088 | 1.00th=[10814], 5.00th=[11731], 10.00th=[12387], 20.00th=[12780], 00:09:10.088 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13304], 00:09:10.088 | 70.00th=[13435], 80.00th=[13698], 90.00th=[14746], 95.00th=[15270], 00:09:10.088 | 99.00th=[16188], 99.50th=[16450], 99.90th=[17171], 99.95th=[17171], 00:09:10.088 | 99.99th=[17433] 00:09:10.088 write: IOPS=5067, BW=19.8MiB/s (20.8MB/s)(19.8MiB/1001msec); 0 zone resets 00:09:10.088 slat (usec): min=9, max=3833, avg=97.52, stdev=429.72 00:09:10.088 clat (usec): min=286, max=17360, avg=12839.31, stdev=1354.59 00:09:10.088 lat (usec): min=3681, max=17384, avg=12936.83, stdev=1405.94 00:09:10.088 clat percentiles (usec): 00:09:10.088 | 1.00th=[ 7767], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:09:10.088 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:09:10.088 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13960], 95.00th=[15139], 00:09:10.089 | 99.00th=[16581], 99.50th=[16712], 99.90th=[16909], 99.95th=[17171], 00:09:10.089 | 99.99th=[17433] 00:09:10.089 bw ( KiB/s): min=19088, max=20521, per=30.40%, avg=19804.50, stdev=1013.28, samples=2 00:09:10.089 iops : min= 4772, max= 5130, avg=4951.00, stdev=253.14, samples=2 00:09:10.089 lat (usec) : 500=0.01% 00:09:10.089 lat (msec) : 4=0.18%, 10=0.82%, 20=99.00% 00:09:10.089 cpu : usr=4.40%, sys=14.00%, ctx=456, majf=0, minf=10 00:09:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.089 issued rwts: total=4608,5073,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.089 job3: (groupid=0, jobs=1): err= 0: pid=66237: Tue Nov 5 
09:31:55 2024 00:09:10.089 read: IOPS=2083, BW=8335KiB/s (8535kB/s)(8360KiB/1003msec) 00:09:10.089 slat (usec): min=8, max=6706, avg=201.27, stdev=863.21 00:09:10.089 clat (usec): min=1107, max=43705, avg=25057.49, stdev=5138.92 00:09:10.089 lat (usec): min=7338, max=44650, avg=25258.76, stdev=5185.63 00:09:10.089 clat percentiles (usec): 00:09:10.089 | 1.00th=[11994], 5.00th=[18220], 10.00th=[20317], 20.00th=[21627], 00:09:10.089 | 30.00th=[21890], 40.00th=[22152], 50.00th=[22676], 60.00th=[25822], 00:09:10.089 | 70.00th=[28181], 80.00th=[30278], 90.00th=[31589], 95.00th=[34341], 00:09:10.089 | 99.00th=[38011], 99.50th=[40109], 99.90th=[43779], 99.95th=[43779], 00:09:10.089 | 99.99th=[43779] 00:09:10.089 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:09:10.089 slat (usec): min=14, max=8321, avg=218.42, stdev=843.74 00:09:10.089 clat (usec): min=13949, max=59891, avg=28795.72, stdev=12606.48 00:09:10.089 lat (usec): min=13975, max=59915, avg=29014.14, stdev=12697.73 00:09:10.089 clat percentiles (usec): 00:09:10.089 | 1.00th=[15139], 5.00th=[15795], 10.00th=[16188], 20.00th=[17957], 00:09:10.089 | 30.00th=[19006], 40.00th=[20841], 50.00th=[21365], 60.00th=[28705], 00:09:10.089 | 70.00th=[37487], 80.00th=[41157], 90.00th=[48497], 95.00th=[51643], 00:09:10.089 | 99.00th=[56361], 99.50th=[58459], 99.90th=[60031], 99.95th=[60031], 00:09:10.089 | 99.99th=[60031] 00:09:10.089 bw ( KiB/s): min= 7504, max=12312, per=15.21%, avg=9908.00, stdev=3399.77, samples=2 00:09:10.089 iops : min= 1876, max= 3078, avg=2477.00, stdev=849.94, samples=2 00:09:10.089 lat (msec) : 2=0.02%, 10=0.34%, 20=23.78%, 50=71.94%, 100=3.91% 00:09:10.089 cpu : usr=1.80%, sys=8.38%, ctx=262, majf=0, minf=7 00:09:10.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:09:10.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:10.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:10.089 issued rwts: total=2090,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:10.089 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:10.089 00:09:10.089 Run status group 0 (all jobs): 00:09:10.089 READ: bw=56.4MiB/s (59.1MB/s), 8335KiB/s-20.1MiB/s (8535kB/s-21.1MB/s), io=56.5MiB (59.3MB), run=1001-1003msec 00:09:10.089 WRITE: bw=63.6MiB/s (66.7MB/s), 9.97MiB/s-22.0MiB/s (10.5MB/s-23.0MB/s), io=63.8MiB (66.9MB), run=1001-1003msec 00:09:10.089 00:09:10.089 Disk stats (read/write): 00:09:10.089 nvme0n1: ios=2130/2560, merge=0/0, ticks=14392/11083, in_queue=25475, util=87.47% 00:09:10.089 nvme0n2: ios=4657/4672, merge=0/0, ticks=12524/11277, in_queue=23801, util=88.68% 00:09:10.089 nvme0n3: ios=4113/4228, merge=0/0, ticks=17337/15523, in_queue=32860, util=89.08% 00:09:10.089 nvme0n4: ios=2048/2175, merge=0/0, ticks=16747/17201, in_queue=33948, util=89.33% 00:09:10.089 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:10.089 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=66250 00:09:10.089 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:10.089 09:31:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:10.089 [global] 00:09:10.089 thread=1 00:09:10.089 invalidate=1 00:09:10.089 rw=read 00:09:10.089 time_based=1 00:09:10.089 runtime=10 00:09:10.089 ioengine=libaio 00:09:10.089 direct=1 00:09:10.089 bs=4096 
00:09:10.089 iodepth=1 00:09:10.089 norandommap=1 00:09:10.089 numjobs=1 00:09:10.089 00:09:10.089 [job0] 00:09:10.089 filename=/dev/nvme0n1 00:09:10.089 [job1] 00:09:10.089 filename=/dev/nvme0n2 00:09:10.089 [job2] 00:09:10.089 filename=/dev/nvme0n3 00:09:10.089 [job3] 00:09:10.089 filename=/dev/nvme0n4 00:09:10.089 Could not set queue depth (nvme0n1) 00:09:10.089 Could not set queue depth (nvme0n2) 00:09:10.089 Could not set queue depth (nvme0n3) 00:09:10.089 Could not set queue depth (nvme0n4) 00:09:10.089 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.089 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.089 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.089 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:10.089 fio-3.35 00:09:10.089 Starting 4 threads 00:09:13.373 09:31:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:13.373 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=60190720, buflen=4096 00:09:13.373 fio: pid=66297, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:13.373 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:13.631 fio: pid=66296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:13.631 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=68939776, buflen=4096 00:09:13.631 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:13.631 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:13.887 fio: pid=66294, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:13.887 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58089472, buflen=4096 00:09:13.887 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:13.887 09:31:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:14.145 fio: pid=66295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:14.145 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=61988864, buflen=4096 00:09:14.145 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.145 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:14.145 00:09:14.145 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66294: Tue Nov 5 09:32:00 2024 00:09:14.145 read: IOPS=3885, BW=15.2MiB/s (15.9MB/s)(55.4MiB/3650msec) 00:09:14.145 slat (usec): min=7, max=13725, avg=17.20, stdev=201.64 00:09:14.145 clat (usec): min=59, max=3828, avg=238.93, stdev=75.34 00:09:14.145 lat (usec): min=140, 
max=13990, avg=256.12, stdev=215.14 00:09:14.145 clat percentiles (usec): 00:09:14.145 | 1.00th=[ 141], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 176], 00:09:14.145 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:09:14.145 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 293], 00:09:14.145 | 99.00th=[ 404], 99.50th=[ 424], 99.90th=[ 660], 99.95th=[ 1385], 00:09:14.145 | 99.99th=[ 3621] 00:09:14.146 bw ( KiB/s): min=13368, max=20052, per=24.87%, avg=15342.29, stdev=2164.73, samples=7 00:09:14.146 iops : min= 3342, max= 5013, avg=3835.57, stdev=541.18, samples=7 00:09:14.146 lat (usec) : 100=0.01%, 250=52.08%, 500=47.73%, 750=0.08%, 1000=0.01% 00:09:14.146 lat (msec) : 2=0.05%, 4=0.03% 00:09:14.146 cpu : usr=1.18%, sys=4.63%, ctx=14192, majf=0, minf=1 00:09:14.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 issued rwts: total=14183,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.146 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66295: Tue Nov 5 09:32:00 2024 00:09:14.146 read: IOPS=3836, BW=15.0MiB/s (15.7MB/s)(59.1MiB/3945msec) 00:09:14.146 slat (usec): min=7, max=15451, avg=18.17, stdev=207.00 00:09:14.146 clat (nsec): min=1451, max=17282k, avg=240925.21, stdev=165294.85 00:09:14.146 lat (usec): min=136, max=17294, avg=259.10, stdev=266.29 00:09:14.146 clat percentiles (usec): 00:09:14.146 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 178], 00:09:14.146 | 30.00th=[ 233], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:09:14.146 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 306], 00:09:14.146 | 99.00th=[ 408], 99.50th=[ 429], 99.90th=[ 693], 99.95th=[ 1418], 00:09:14.146 | 99.99th=[ 6456] 00:09:14.146 bw ( KiB/s): min=12960, max=16933, per=23.77%, avg=14664.71, stdev=1330.19, samples=7 00:09:14.146 iops : min= 3240, max= 4233, avg=3666.14, stdev=332.48, samples=7 00:09:14.146 lat (usec) : 2=0.01%, 4=0.01%, 100=0.01%, 250=52.91%, 500=46.81% 00:09:14.146 lat (usec) : 750=0.18%, 1000=0.01% 00:09:14.146 lat (msec) : 2=0.03%, 4=0.02%, 10=0.01%, 20=0.01% 00:09:14.146 cpu : usr=1.37%, sys=4.84%, ctx=15193, majf=0, minf=2 00:09:14.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 issued rwts: total=15135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.146 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66296: Tue Nov 5 09:32:00 2024 00:09:14.146 read: IOPS=4959, BW=19.4MiB/s (20.3MB/s)(65.7MiB/3394msec) 00:09:14.146 slat (usec): min=7, max=10428, avg=14.63, stdev=100.28 00:09:14.146 clat (usec): min=3, max=4137, avg=185.74, stdev=49.87 00:09:14.146 lat (usec): min=156, max=10673, avg=200.36, stdev=112.54 00:09:14.146 clat percentiles (usec): 00:09:14.146 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 169], 00:09:14.146 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:09:14.146 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 212], 95.00th=[ 
239], 00:09:14.146 | 99.00th=[ 273], 99.50th=[ 293], 99.90th=[ 553], 99.95th=[ 660], 00:09:14.146 | 99.99th=[ 2343] 00:09:14.146 bw ( KiB/s): min=20008, max=21080, per=33.23%, avg=20497.33, stdev=390.23, samples=6 00:09:14.146 iops : min= 5002, max= 5270, avg=5124.33, stdev=97.56, samples=6 00:09:14.146 lat (usec) : 4=0.01%, 10=0.01%, 100=0.01%, 250=97.08%, 500=2.79% 00:09:14.146 lat (usec) : 750=0.07%, 1000=0.01% 00:09:14.146 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:09:14.146 cpu : usr=1.24%, sys=6.01%, ctx=16842, majf=0, minf=2 00:09:14.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 issued rwts: total=16832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.146 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=66297: Tue Nov 5 09:32:00 2024 00:09:14.146 read: IOPS=4869, BW=19.0MiB/s (19.9MB/s)(57.4MiB/3018msec) 00:09:14.146 slat (usec): min=11, max=3239, avg=19.16, stdev=27.74 00:09:14.146 clat (usec): min=65, max=2536, avg=184.48, stdev=45.26 00:09:14.146 lat (usec): min=160, max=3304, avg=203.63, stdev=54.51 00:09:14.146 clat percentiles (usec): 00:09:14.146 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:09:14.146 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:09:14.146 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 215], 00:09:14.146 | 99.00th=[ 404], 99.50th=[ 416], 99.90th=[ 441], 99.95th=[ 529], 00:09:14.146 | 99.99th=[ 1221] 00:09:14.146 bw ( KiB/s): min=17128, max=21144, per=31.60%, avg=19497.33, stdev=1562.58, samples=6 00:09:14.146 iops : min= 4282, max= 5286, avg=4874.33, stdev=390.65, samples=6 00:09:14.146 lat (usec) : 100=0.01%, 250=96.66%, 500=3.26%, 750=0.04%, 1000=0.01% 00:09:14.146 lat (msec) : 2=0.01%, 4=0.01% 00:09:14.146 cpu : usr=1.92%, sys=7.95%, ctx=14699, majf=0, minf=2 00:09:14.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.146 issued rwts: total=14696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.146 00:09:14.146 Run status group 0 (all jobs): 00:09:14.146 READ: bw=60.2MiB/s (63.2MB/s), 15.0MiB/s-19.4MiB/s (15.7MB/s-20.3MB/s), io=238MiB (249MB), run=3018-3945msec 00:09:14.146 00:09:14.146 Disk stats (read/write): 00:09:14.146 nvme0n1: ios=13991/0, merge=0/0, ticks=3371/0, in_queue=3371, util=95.19% 00:09:14.146 nvme0n2: ios=14658/0, merge=0/0, ticks=3552/0, in_queue=3552, util=95.46% 00:09:14.146 nvme0n3: ios=16739/0, merge=0/0, ticks=3102/0, in_queue=3102, util=96.54% 00:09:14.146 nvme0n4: ios=13922/0, merge=0/0, ticks=2613/0, in_queue=2613, util=96.73% 00:09:14.404 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.404 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:14.970 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.970 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:14.970 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:14.970 09:32:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:15.536 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:15.536 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 66250 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:15.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:15.794 nvmf hotplug test: fio failed as expected 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:15.794 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:09:16.053 09:32:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:16.053 rmmod nvme_tcp 00:09:16.053 rmmod nvme_fabrics 00:09:16.053 rmmod nvme_keyring 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 65865 ']' 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 65865 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 65865 ']' 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 65865 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65865 00:09:16.053 killing process with pid 65865 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65865' 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 65865 00:09:16.053 09:32:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 65865 00:09:16.311 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:16.311 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:16.311 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:16.311 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:09:16.311 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:09:16.311 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 
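The hotplug sequence recorded above can be approximated outside the harness. A minimal sketch, assuming the repo path shown in this log and an already-connected /dev/nvme0n1; the rpc.py call and bdev name are the test's own, everything else is illustrative:

  # Start a time-based read job against the exported namespace.
  fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
      --ioengine=libaio --direct=1 --time_based --runtime=10 &
  sleep 3
  # Delete the backing bdev while fio is still running; subsequent reads fail
  # with "Operation not supported", which target/fio.sh counts as the expected
  # outcome (fio_status=4, "nvmf hotplug test: fio failed as expected").
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
  wait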
00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.312 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:09:16.571 00:09:16.571 real 0m20.012s 00:09:16.571 user 1m14.798s 00:09:16.571 sys 0m11.061s 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:16.571 ************************************ 00:09:16.571 END TEST nvmf_fio_target 00:09:16.571 ************************************ 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.571 ************************************ 00:09:16.571 START TEST nvmf_bdevio 00:09:16.571 ************************************ 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:16.571 * Looking for test storage... 
00:09:16.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.571 --rc genhtml_branch_coverage=1 00:09:16.571 --rc genhtml_function_coverage=1 00:09:16.571 --rc genhtml_legend=1 00:09:16.571 --rc geninfo_all_blocks=1 00:09:16.571 --rc geninfo_unexecuted_blocks=1 00:09:16.571 00:09:16.571 ' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.571 --rc genhtml_branch_coverage=1 00:09:16.571 --rc genhtml_function_coverage=1 00:09:16.571 --rc genhtml_legend=1 00:09:16.571 --rc geninfo_all_blocks=1 00:09:16.571 --rc geninfo_unexecuted_blocks=1 00:09:16.571 00:09:16.571 ' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.571 --rc genhtml_branch_coverage=1 00:09:16.571 --rc genhtml_function_coverage=1 00:09:16.571 --rc genhtml_legend=1 00:09:16.571 --rc geninfo_all_blocks=1 00:09:16.571 --rc geninfo_unexecuted_blocks=1 00:09:16.571 00:09:16.571 ' 00:09:16.571 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:16.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.571 --rc genhtml_branch_coverage=1 00:09:16.571 --rc genhtml_function_coverage=1 00:09:16.571 --rc genhtml_legend=1 00:09:16.571 --rc geninfo_all_blocks=1 00:09:16.571 --rc geninfo_unexecuted_blocks=1 00:09:16.571 00:09:16.571 ' 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.572 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.830 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.831 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
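nvmftestinit drives nvmf_veth_init below; condensed, the topology it builds is roughly the following (commands taken from the trace that follows, trimmed to a single initiator/target pair for readability):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # host side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The target then listens on 10.0.0.3:4420 inside the namespace while the initiator connects from 10.0.0.1 across nvmf_br, which the ping checks further down verify.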
00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:16.831 Cannot find device "nvmf_init_br" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:16.831 Cannot find device "nvmf_init_br2" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:16.831 Cannot find device "nvmf_tgt_br" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:16.831 Cannot find device "nvmf_tgt_br2" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:16.831 Cannot find device "nvmf_init_br" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:16.831 Cannot find device "nvmf_init_br2" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:16.831 Cannot find device "nvmf_tgt_br" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:16.831 Cannot find device "nvmf_tgt_br2" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:16.831 Cannot find device "nvmf_br" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:16.831 Cannot find device "nvmf_init_if" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:16.831 Cannot find device "nvmf_init_if2" 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:16.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:16.831 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:16.831 
09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:16.831 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:17.090 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:17.090 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:09:17.090 00:09:17.090 --- 10.0.0.3 ping statistics --- 00:09:17.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.090 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:17.090 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:17.090 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:09:17.090 00:09:17.090 --- 10.0.0.4 ping statistics --- 00:09:17.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.090 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:17.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:17.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:09:17.090 00:09:17.090 --- 10.0.0.1 ping statistics --- 00:09:17.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.090 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:17.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:17.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:17.090 00:09:17.090 --- 10.0.0.2 ping statistics --- 00:09:17.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.090 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.090 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=66616 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 66616 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 66616 ']' 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:17.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.091 09:32:02 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:17.091 [2024-11-05 09:32:03.025699] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:09:17.091 [2024-11-05 09:32:03.025804] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.349 [2024-11-05 09:32:03.235976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:17.349 [2024-11-05 09:32:03.278974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.349 [2024-11-05 09:32:03.279030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.349 [2024-11-05 09:32:03.279042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.349 [2024-11-05 09:32:03.279050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.349 [2024-11-05 09:32:03.279058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.349 [2024-11-05 09:32:03.279813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:17.349 [2024-11-05 09:32:03.279895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:17.349 [2024-11-05 09:32:03.279971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:17.349 [2024-11-05 09:32:03.279974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.607 [2024-11-05 09:32:03.309977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.173 [2024-11-05 09:32:04.106187] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.173 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.431 Malloc0 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.431 [2024-11-05 09:32:04.158156] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:18.431 { 00:09:18.431 "params": { 00:09:18.431 "name": "Nvme$subsystem", 00:09:18.431 "trtype": "$TEST_TRANSPORT", 00:09:18.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:18.431 "adrfam": "ipv4", 00:09:18.431 "trsvcid": "$NVMF_PORT", 00:09:18.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:18.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:18.431 "hdgst": ${hdgst:-false}, 00:09:18.431 "ddgst": ${ddgst:-false} 00:09:18.431 }, 00:09:18.431 "method": "bdev_nvme_attach_controller" 00:09:18.431 } 00:09:18.431 EOF 00:09:18.431 )") 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:09:18.431 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:18.431 "params": { 00:09:18.431 "name": "Nvme1", 00:09:18.431 "trtype": "tcp", 00:09:18.431 "traddr": "10.0.0.3", 00:09:18.431 "adrfam": "ipv4", 00:09:18.431 "trsvcid": "4420", 00:09:18.431 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:18.431 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:18.431 "hdgst": false, 00:09:18.431 "ddgst": false 00:09:18.431 }, 00:09:18.431 "method": "bdev_nvme_attach_controller" 00:09:18.431 }' 00:09:18.431 [2024-11-05 09:32:04.211946] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:09:18.431 [2024-11-05 09:32:04.212041] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66652 ] 00:09:18.431 [2024-11-05 09:32:04.360155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:18.690 [2024-11-05 09:32:04.396457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.690 [2024-11-05 09:32:04.396605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.690 [2024-11-05 09:32:04.396610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.690 [2024-11-05 09:32:04.436148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.690 I/O targets: 00:09:18.690 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:18.690 00:09:18.690 00:09:18.690 CUnit - A unit testing framework for C - Version 2.1-3 00:09:18.690 http://cunit.sourceforge.net/ 00:09:18.690 00:09:18.690 00:09:18.690 Suite: bdevio tests on: Nvme1n1 00:09:18.690 Test: blockdev write read block ...passed 00:09:18.690 Test: blockdev write zeroes read block ...passed 00:09:18.690 Test: blockdev write zeroes read no split ...passed 00:09:18.690 Test: blockdev write zeroes read split ...passed 00:09:18.690 Test: blockdev write zeroes read split partial ...passed 00:09:18.690 Test: blockdev reset ...[2024-11-05 09:32:04.570087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:18.690 [2024-11-05 09:32:04.570203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x138e180 (9): Bad file descriptor 00:09:18.690 [2024-11-05 09:32:04.588512] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:09:18.690 passed 00:09:18.690 Test: blockdev write read 8 blocks ...passed 00:09:18.690 Test: blockdev write read size > 128k ...passed 00:09:18.690 Test: blockdev write read invalid size ...passed 00:09:18.690 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:18.690 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:18.690 Test: blockdev write read max offset ...passed 00:09:18.690 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:18.690 Test: blockdev writev readv 8 blocks ...passed 00:09:18.690 Test: blockdev writev readv 30 x 1block ...passed 00:09:18.690 Test: blockdev writev readv block ...passed 00:09:18.690 Test: blockdev writev readv size > 128k ...passed 00:09:18.690 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:18.690 Test: blockdev comparev and writev ...[2024-11-05 09:32:04.600537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.601092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.601679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.602146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.602541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.602567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.602586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.602597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.602908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.602931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.602950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.602960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.603248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.603271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.603289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:18.690 [2024-11-05 09:32:04.603299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:18.690 passed 00:09:18.690 Test: blockdev nvme passthru rw ...passed 00:09:18.690 Test: blockdev nvme passthru vendor specific ...[2024-11-05 09:32:04.604418] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.690 [2024-11-05 09:32:04.604672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.604817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.690 [2024-11-05 09:32:04.605012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.605225] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.690 [2024-11-05 09:32:04.605323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:18.690 [2024-11-05 09:32:04.605542] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:18.690 [2024-11-05 09:32:04.605578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:18.690 passed 00:09:18.690 Test: blockdev nvme admin passthru ...passed 00:09:18.690 Test: blockdev copy ...passed 00:09:18.690 00:09:18.690 Run Summary: Type Total Ran Passed Failed Inactive 00:09:18.690 suites 1 1 n/a 0 0 00:09:18.690 tests 23 23 23 0 0 00:09:18.690 asserts 152 152 152 0 n/a 00:09:18.690 00:09:18.690 Elapsed time = 0.155 seconds 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:18.949 rmmod nvme_tcp 00:09:18.949 rmmod nvme_fabrics 00:09:18.949 rmmod nvme_keyring 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:18.949 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
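[annotation] The teardown that follows (nvmf_delete_subsystem, sync, the rmmod lines) is nvmftestfini unwinding the target. Condensed from the log, the host-side sequence looks like the sketch below; the break/sleep inside the retry loop is an assumption, only the {1..20} loop and the set +e/-e bracketing are visible above:

    # delete the subsystem via RPC, then unload the initiator-side kernel modules
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    sync
    set +e                    # module removal may fail while references drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e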
00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 66616 ']' 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 66616 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 66616 ']' 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 66616 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66616 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:09:18.950 killing process with pid 66616 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66616' 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 66616 00:09:18.950 09:32:04 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 66616 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:19.208 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # 
ip link delete nvmf_init_if 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:09:19.467 ************************************ 00:09:19.467 00:09:19.467 real 0m2.963s 00:09:19.467 user 0m8.557s 00:09:19.467 sys 0m0.757s 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:19.467 END TEST nvmf_bdevio 00:09:19.467 ************************************ 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:19.467 00:09:19.467 real 2m30.034s 00:09:19.467 user 6m31.189s 00:09:19.467 sys 0m53.453s 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.467 ************************************ 00:09:19.467 END TEST nvmf_target_core 00:09:19.467 ************************************ 00:09:19.467 09:32:05 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:19.467 09:32:05 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:19.467 09:32:05 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:19.467 09:32:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:19.467 ************************************ 00:09:19.467 START TEST nvmf_target_extra 00:09:19.467 ************************************ 00:09:19.467 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:19.726 * Looking for test storage... 
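[annotation] Every START TEST/END TEST banner plus the real/user/sys triple above comes from the run_test wrapper in autotest_common.sh. A minimal stand-in (a sketch of the observable behavior, not the verbatim SPDK helper) is:

    # time a test script and bracket its output with START/END banners
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        echo "************ END TEST $name ************"
    }
    run_test nvmf_target_extra \
        /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp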
00:09:19.726 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:19.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.726 --rc genhtml_branch_coverage=1 00:09:19.726 --rc genhtml_function_coverage=1 00:09:19.726 --rc genhtml_legend=1 00:09:19.726 --rc geninfo_all_blocks=1 00:09:19.726 --rc geninfo_unexecuted_blocks=1 00:09:19.726 00:09:19.726 ' 00:09:19.726 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:19.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.726 --rc genhtml_branch_coverage=1 00:09:19.726 --rc genhtml_function_coverage=1 00:09:19.726 --rc genhtml_legend=1 00:09:19.726 --rc geninfo_all_blocks=1 00:09:19.727 --rc geninfo_unexecuted_blocks=1 00:09:19.727 00:09:19.727 ' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:19.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.727 --rc genhtml_branch_coverage=1 00:09:19.727 --rc genhtml_function_coverage=1 00:09:19.727 --rc genhtml_legend=1 00:09:19.727 --rc geninfo_all_blocks=1 00:09:19.727 --rc geninfo_unexecuted_blocks=1 00:09:19.727 00:09:19.727 ' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:19.727 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.727 --rc genhtml_branch_coverage=1 00:09:19.727 --rc genhtml_function_coverage=1 00:09:19.727 --rc genhtml_legend=1 00:09:19.727 --rc geninfo_all_blocks=1 00:09:19.727 --rc geninfo_unexecuted_blocks=1 00:09:19.727 00:09:19.727 ' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.727 09:32:05 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.727 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:19.727 ************************************ 00:09:19.727 START TEST nvmf_auth_target 00:09:19.727 ************************************ 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:09:19.727 * Looking for test storage... 
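[annotation] The ver1/ver2 gymnastics above are scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x and therefore needs the legacy --rc lcov_branch_coverage=1 flags. The comparison logic, condensed into a sketch:

    # split versions on .-: and compare numerically, field by field
    lt() {
        local IFS=.-:
        local -a v1 v2
        read -ra v1 <<< "$1"
        read -ra v2 <<< "$2"
        local i
        for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1    # equal versions are not less-than
    }
    lt 1.15 2 && echo "use legacy lcov --rc options"

The "[: : integer expression expected" error captured above is benign: build_nvmf_app_args tests an empty variable with -eq, bash rejects the comparison, and the script carries on regardless.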
00:09:19.727 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:09:19.727 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.987 --rc genhtml_branch_coverage=1 00:09:19.987 --rc genhtml_function_coverage=1 00:09:19.987 --rc genhtml_legend=1 00:09:19.987 --rc geninfo_all_blocks=1 00:09:19.987 --rc geninfo_unexecuted_blocks=1 00:09:19.987 00:09:19.987 ' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.987 --rc genhtml_branch_coverage=1 00:09:19.987 --rc genhtml_function_coverage=1 00:09:19.987 --rc genhtml_legend=1 00:09:19.987 --rc geninfo_all_blocks=1 00:09:19.987 --rc geninfo_unexecuted_blocks=1 00:09:19.987 00:09:19.987 ' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.987 --rc genhtml_branch_coverage=1 00:09:19.987 --rc genhtml_function_coverage=1 00:09:19.987 --rc genhtml_legend=1 00:09:19.987 --rc geninfo_all_blocks=1 00:09:19.987 --rc geninfo_unexecuted_blocks=1 00:09:19.987 00:09:19.987 ' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:19.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:19.987 --rc genhtml_branch_coverage=1 00:09:19.987 --rc genhtml_function_coverage=1 00:09:19.987 --rc genhtml_legend=1 00:09:19.987 --rc geninfo_all_blocks=1 00:09:19.987 --rc geninfo_unexecuted_blocks=1 00:09:19.987 00:09:19.987 ' 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.987 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:19.988 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:19.988 
09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:19.988 Cannot find device "nvmf_init_br" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:19.988 Cannot find device "nvmf_init_br2" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:19.988 Cannot find device "nvmf_tgt_br" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:19.988 Cannot find device "nvmf_tgt_br2" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:19.988 Cannot find device "nvmf_init_br" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:19.988 Cannot find device "nvmf_init_br2" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:19.988 Cannot find device "nvmf_tgt_br" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:19.988 Cannot find device "nvmf_tgt_br2" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:19.988 Cannot find device "nvmf_br" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:19.988 Cannot find device "nvmf_init_if" 00:09:19.988 09:32:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:19.988 Cannot find device "nvmf_init_if2" 00:09:19.988 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:19.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:19.989 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:19.989 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:20.247 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:20.247 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:20.247 09:32:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:20.247 09:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:20.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:20.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms 00:09:20.247 00:09:20.247 --- 10.0.0.3 ping statistics --- 00:09:20.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.247 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:20.247 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:20.247 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:09:20.247 00:09:20.247 --- 10.0.0.4 ping statistics --- 00:09:20.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.247 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:20.247 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:20.247 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:20.247 00:09:20.247 --- 10.0.0.1 ping statistics --- 00:09:20.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.247 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:20.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:20.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:09:20.247 00:09:20.247 --- 10.0.0.2 ping statistics --- 00:09:20.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:20.247 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:20.247 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:20.248 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:20.248 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:20.248 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:20.248 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:20.248 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=66936 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 66936 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66936 ']' 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
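[annotation] Once the veth/bridge plumbing checks out (all four pings succeed at sub-0.1 ms), nvmfappstart launches the target inside the namespace and waitforlisten blocks until it is ready. Condensed into a sketch, with the socket poll as a crude stand-in for the real waitforlisten helper:

    # start nvmf_tgt in the target namespace and wait for its RPC socket
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &
    nvmfpid=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done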
00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:20.506 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=66961 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:09:20.765 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a464d336787310db32eededb9d6dd9178f4b819a5d22192f 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xy3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a464d336787310db32eededb9d6dd9178f4b819a5d22192f 0 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a464d336787310db32eededb9d6dd9178f4b819a5d22192f 0 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a464d336787310db32eededb9d6dd9178f4b819a5d22192f 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:20.766 09:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xy3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xy3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Xy3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f6267acd2337da231e92b5277af5e508e7fff75786f2ba1cc762b7979f5903f1 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.7cO 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f6267acd2337da231e92b5277af5e508e7fff75786f2ba1cc762b7979f5903f1 3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f6267acd2337da231e92b5277af5e508e7fff75786f2ba1cc762b7979f5903f1 3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f6267acd2337da231e92b5277af5e508e7fff75786f2ba1cc762b7979f5903f1 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:20.766 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.7cO 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.7cO 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.7cO 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:21.026 09:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f7f8d8458afafe38b7907ca90695c9b6 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.7iv 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f7f8d8458afafe38b7907ca90695c9b6 1 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f7f8d8458afafe38b7907ca90695c9b6 1 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f7f8d8458afafe38b7907ca90695c9b6 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.7iv 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.7iv 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.7iv 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=f1f342a18e5b122deabb5bb3c1248d889c75d12e84de9064 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.2Cl 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key f1f342a18e5b122deabb5bb3c1248d889c75d12e84de9064 2 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 f1f342a18e5b122deabb5bb3c1248d889c75d12e84de9064 2 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=f1f342a18e5b122deabb5bb3c1248d889c75d12e84de9064 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.2Cl 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.2Cl 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.2Cl 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=53daa5fb6cec8865921fd714822ed8f1944869e314fe1102 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.HoR 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 53daa5fb6cec8865921fd714822ed8f1944869e314fe1102 2 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 53daa5fb6cec8865921fd714822ed8f1944869e314fe1102 2 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=53daa5fb6cec8865921fd714822ed8f1944869e314fe1102 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.HoR 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.HoR 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.HoR 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:21.026 09:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=0124d090f30fa4ba323ef52df3806d9b 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.lnn 00:09:21.026 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 0124d090f30fa4ba323ef52df3806d9b 1 00:09:21.027 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 0124d090f30fa4ba323ef52df3806d9b 1 00:09:21.027 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:21.027 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:21.027 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=0124d090f30fa4ba323ef52df3806d9b 00:09:21.027 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:09:21.027 09:32:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.lnn 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.lnn 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.lnn 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=2c4fc0ae3db802561dc24f30e94daaff880a5dd1e92ad8cb3e8941a5fccf9670 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:09:21.285 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.cOp 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
2c4fc0ae3db802561dc24f30e94daaff880a5dd1e92ad8cb3e8941a5fccf9670 3 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 2c4fc0ae3db802561dc24f30e94daaff880a5dd1e92ad8cb3e8941a5fccf9670 3 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=2c4fc0ae3db802561dc24f30e94daaff880a5dd1e92ad8cb3e8941a5fccf9670 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.cOp 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.cOp 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.cOp 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 66936 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66936 ']' 00:09:21.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.286 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 66961 /var/tmp/host.sock 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 66961 ']' 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:09:21.544 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
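[editor's note] The gen_dhchap_key/format_dhchap_key traces above build the four key/ctrlr-key pairs the rest of this test uses: xxd reads random bytes from /dev/urandom as a hex string, the inlined "python -" heredoc wraps it in the DH-HMAC-CHAP secret representation ("DHHC-1:<hash id>:<base64(secret + crc32)>:", hash ids 00/01/02/03 for null/sha256/sha384/sha512), and the result is written 0600 to a /tmp/spdk.key-* file. Below is a minimal self-contained sketch reconstructed from the visible commands, not copied verbatim from nvmf/common.sh; as a sanity check, the key=f6267acd.../digest=3 trace above corresponds to the DHHC-1:03:ZjYyNjdh... ctrl-secret that reappears in the nvme connect invocations further down.

    # Reconstructed sketch of one key-generation step from the traces above.
    # Usage: gen_dhchap_key <digest-id> <hex-len>, e.g. gen_dhchap_key 3 64
    gen_dhchap_key() {
      local digest=$1 len=$2 key
      # len=64 reads 32 random bytes -> 64 hex chars, matching "xxd -p -c0 -l 32" above
      key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
      python - "$key" "$digest" <<'PY'
    import base64, sys, zlib
    key = sys.argv[1].encode()
    # a 4-byte little-endian CRC32 of the secret is appended before base64-encoding,
    # which is why the encoded secrets in this log run a few characters past a plain
    # base64 of the hex string
    crc = zlib.crc32(key).to_bytes(4, "little")
    print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    PY
    }

    # mirrors the mktemp/chmod pattern in the trace:
    file=$(mktemp -t spdk.key-sha512.XXX) && gen_dhchap_key 3 64 > "$file" && chmod 0600 "$file"
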
00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:21.544 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.803 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:21.803 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:09:21.803 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:09:21.803 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.803 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:21.803 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xy3 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Xy3 00:09:22.061 09:32:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Xy3 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.7cO ]] 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7cO 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7cO 00:09:22.319 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7cO 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7iv 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.7iv 00:09:22.578 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.7iv 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.2Cl ]] 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Cl 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Cl 00:09:22.836 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Cl 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HoR 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.HoR 00:09:23.094 09:32:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.HoR 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.lnn ]] 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lnn 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lnn 00:09:23.352 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lnn 00:09:23.917 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:09:23.917 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cOp 00:09:23.917 09:32:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.917 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:23.917 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.917 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.cOp 00:09:23.917 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.cOp 00:09:24.175 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:09:24.175 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:09:24.175 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:24.175 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:24.175 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:24.175 09:32:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.433 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:24.692 00:09:24.692 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:24.692 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:24.692 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:24.950 { 00:09:24.950 "cntlid": 1, 00:09:24.950 "qid": 0, 00:09:24.950 "state": "enabled", 00:09:24.950 "thread": "nvmf_tgt_poll_group_000", 00:09:24.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:24.950 "listen_address": { 00:09:24.950 "trtype": "TCP", 00:09:24.950 "adrfam": "IPv4", 00:09:24.950 "traddr": "10.0.0.3", 00:09:24.950 "trsvcid": "4420" 00:09:24.950 }, 00:09:24.950 "peer_address": { 00:09:24.950 "trtype": "TCP", 00:09:24.950 "adrfam": "IPv4", 00:09:24.950 "traddr": "10.0.0.1", 00:09:24.950 "trsvcid": "58022" 00:09:24.950 }, 00:09:24.950 "auth": { 00:09:24.950 "state": "completed", 00:09:24.950 "digest": "sha256", 00:09:24.950 "dhgroup": "null" 00:09:24.950 } 00:09:24.950 } 00:09:24.950 ]' 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:24.950 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:24.951 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:25.208 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:25.208 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:25.208 09:32:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:25.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:25.466 09:32:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:30.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.730 09:32:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.730 09:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:30.730 00:09:30.730 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:30.730 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:30.730 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:30.730 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:30.730 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:30.731 { 00:09:30.731 "cntlid": 3, 00:09:30.731 "qid": 0, 00:09:30.731 "state": "enabled", 00:09:30.731 "thread": "nvmf_tgt_poll_group_000", 00:09:30.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:30.731 "listen_address": { 00:09:30.731 "trtype": "TCP", 00:09:30.731 "adrfam": "IPv4", 00:09:30.731 "traddr": "10.0.0.3", 00:09:30.731 "trsvcid": "4420" 00:09:30.731 }, 00:09:30.731 "peer_address": { 00:09:30.731 "trtype": "TCP", 00:09:30.731 "adrfam": "IPv4", 00:09:30.731 "traddr": "10.0.0.1", 00:09:30.731 "trsvcid": "33708" 00:09:30.731 }, 00:09:30.731 "auth": { 00:09:30.731 "state": "completed", 00:09:30.731 "digest": "sha256", 00:09:30.731 "dhgroup": "null" 00:09:30.731 } 00:09:30.731 } 00:09:30.731 ]' 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:30.731 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:30.988 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:30.988 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:30.988 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:30.988 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:30.988 09:32:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:31.246 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret 
DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:31.246 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:32.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:32.182 09:32:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.441 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:32.700 00:09:32.700 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:32.700 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:32.700 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:32.958 { 00:09:32.958 "cntlid": 5, 00:09:32.958 "qid": 0, 00:09:32.958 "state": "enabled", 00:09:32.958 "thread": "nvmf_tgt_poll_group_000", 00:09:32.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:32.958 "listen_address": { 00:09:32.958 "trtype": "TCP", 00:09:32.958 "adrfam": "IPv4", 00:09:32.958 "traddr": "10.0.0.3", 00:09:32.958 "trsvcid": "4420" 00:09:32.958 }, 00:09:32.958 "peer_address": { 00:09:32.958 "trtype": "TCP", 00:09:32.958 "adrfam": "IPv4", 00:09:32.958 "traddr": "10.0.0.1", 00:09:32.958 "trsvcid": "33738" 00:09:32.958 }, 00:09:32.958 "auth": { 00:09:32.958 "state": "completed", 00:09:32.958 "digest": "sha256", 00:09:32.958 "dhgroup": "null" 00:09:32.958 } 00:09:32.958 } 00:09:32.958 ]' 00:09:32.958 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:33.217 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:33.217 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:33.217 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:33.217 09:32:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:33.217 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:33.217 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:33.217 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:33.475 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:33.475 09:32:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:34.410 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.410 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:34.669 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.669 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:34.669 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:34.669 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:34.927 00:09:34.927 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:34.927 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:34.927 09:32:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:35.185 { 00:09:35.185 "cntlid": 7, 00:09:35.185 "qid": 0, 00:09:35.185 "state": "enabled", 00:09:35.185 "thread": "nvmf_tgt_poll_group_000", 00:09:35.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:35.185 "listen_address": { 00:09:35.185 "trtype": "TCP", 00:09:35.185 "adrfam": "IPv4", 00:09:35.185 "traddr": "10.0.0.3", 00:09:35.185 "trsvcid": "4420" 00:09:35.185 }, 00:09:35.185 "peer_address": { 00:09:35.185 "trtype": "TCP", 00:09:35.185 "adrfam": "IPv4", 00:09:35.185 "traddr": "10.0.0.1", 00:09:35.185 "trsvcid": "33760" 00:09:35.185 }, 00:09:35.185 "auth": { 00:09:35.185 "state": "completed", 00:09:35.185 "digest": "sha256", 00:09:35.185 "dhgroup": "null" 00:09:35.185 } 00:09:35.185 } 00:09:35.185 ]' 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:35.185 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:35.444 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:09:35.444 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:35.444 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:35.444 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:35.444 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:35.703 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:09:35.703 09:32:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:36.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:36.269 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:36.837 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:37.096 00:09:37.096 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:37.096 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:37.096 09:32:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:37.354 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:37.354 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:37.354 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.354 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:37.354 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.354 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:37.354 { 00:09:37.354 "cntlid": 9, 00:09:37.355 "qid": 0, 00:09:37.355 "state": "enabled", 00:09:37.355 "thread": "nvmf_tgt_poll_group_000", 00:09:37.355 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:37.355 "listen_address": { 00:09:37.355 "trtype": "TCP", 00:09:37.355 "adrfam": "IPv4", 00:09:37.355 "traddr": "10.0.0.3", 00:09:37.355 "trsvcid": "4420" 00:09:37.355 }, 00:09:37.355 "peer_address": { 00:09:37.355 "trtype": "TCP", 00:09:37.355 "adrfam": "IPv4", 00:09:37.355 "traddr": "10.0.0.1", 00:09:37.355 "trsvcid": "51372" 00:09:37.355 }, 00:09:37.355 "auth": { 00:09:37.355 "state": "completed", 00:09:37.355 "digest": "sha256", 00:09:37.355 "dhgroup": "ffdhe2048" 00:09:37.355 } 00:09:37.355 } 00:09:37.355 ]' 00:09:37.355 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:37.355 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:37.355 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:37.613 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:37.613 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:37.613 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:37.613 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:37.613 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:37.871 
09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:37.872 09:32:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:38.513 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:38.513 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:38.772 09:32:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:39.338 00:09:39.338 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:39.338 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:39.338 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:39.596 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:39.596 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:39.596 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.596 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:39.596 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.596 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:39.596 { 00:09:39.596 "cntlid": 11, 00:09:39.596 "qid": 0, 00:09:39.596 "state": "enabled", 00:09:39.596 "thread": "nvmf_tgt_poll_group_000", 00:09:39.596 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:39.596 "listen_address": { 00:09:39.596 "trtype": "TCP", 00:09:39.597 "adrfam": "IPv4", 00:09:39.597 "traddr": "10.0.0.3", 00:09:39.597 "trsvcid": "4420" 00:09:39.597 }, 00:09:39.597 "peer_address": { 00:09:39.597 "trtype": "TCP", 00:09:39.597 "adrfam": "IPv4", 00:09:39.597 "traddr": "10.0.0.1", 00:09:39.597 "trsvcid": "51408" 00:09:39.597 }, 00:09:39.597 "auth": { 00:09:39.597 "state": "completed", 00:09:39.597 "digest": "sha256", 00:09:39.597 "dhgroup": "ffdhe2048" 00:09:39.597 } 00:09:39.597 } 00:09:39.597 ]' 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:39.597 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:39.597 
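# --- Each attach above is verified against the target: nvmf_subsystem_get_qpairs
# must report exactly the digest/dhgroup that were negotiated, with
# auth.state == "completed". Minimal sketch of that check (variable names
# illustrative; the RPC and jq paths are the ones used in this trace):
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]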
09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:40.163 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:40.163 09:32:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:40.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:40.729 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:40.988 09:32:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:41.247 00:09:41.506 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:41.506 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:41.506 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:41.764 { 00:09:41.764 "cntlid": 13, 00:09:41.764 "qid": 0, 00:09:41.764 "state": "enabled", 00:09:41.764 "thread": "nvmf_tgt_poll_group_000", 00:09:41.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:41.764 "listen_address": { 00:09:41.764 "trtype": "TCP", 00:09:41.764 "adrfam": "IPv4", 00:09:41.764 "traddr": "10.0.0.3", 00:09:41.764 "trsvcid": "4420" 00:09:41.764 }, 00:09:41.764 "peer_address": { 00:09:41.764 "trtype": "TCP", 00:09:41.764 "adrfam": "IPv4", 00:09:41.764 "traddr": "10.0.0.1", 00:09:41.764 "trsvcid": "51436" 00:09:41.764 }, 00:09:41.764 "auth": { 00:09:41.764 "state": "completed", 00:09:41.764 "digest": "sha256", 00:09:41.764 "dhgroup": "ffdhe2048" 00:09:41.764 } 00:09:41.764 } 00:09:41.764 ]' 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:41.764 09:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:41.764 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:42.022 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:42.022 09:32:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:42.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:42.955 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
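# --- Note the nvmf_subsystem_add_host call above for key3 passes no
# --dhchap-ctrlr-key: the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion
# is empty when ckeys[3] is unset/empty, so this iteration exercises
# unidirectional authentication (the host is challenged, the controller is
# not). Sketch of the same expansion trick (array contents illustrative):
ckeys=(ckey0 ckey1 ckey2 "")    # key3 has no controller key
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "extra args: ${ckey[@]:-(none - one-way auth)}"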
00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:43.213 09:32:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:43.472 00:09:43.472 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:43.472 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:43.472 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:43.731 { 00:09:43.731 "cntlid": 15, 00:09:43.731 "qid": 0, 00:09:43.731 "state": "enabled", 00:09:43.731 "thread": "nvmf_tgt_poll_group_000", 00:09:43.731 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:43.731 "listen_address": { 00:09:43.731 "trtype": "TCP", 00:09:43.731 "adrfam": "IPv4", 00:09:43.731 "traddr": "10.0.0.3", 00:09:43.731 "trsvcid": "4420" 00:09:43.731 }, 00:09:43.731 "peer_address": { 00:09:43.731 "trtype": "TCP", 00:09:43.731 "adrfam": "IPv4", 00:09:43.731 "traddr": "10.0.0.1", 00:09:43.731 "trsvcid": "51464" 00:09:43.731 }, 00:09:43.731 "auth": { 00:09:43.731 "state": "completed", 00:09:43.731 "digest": "sha256", 00:09:43.731 "dhgroup": "ffdhe2048" 00:09:43.731 } 00:09:43.731 } 00:09:43.731 ]' 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:43.731 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:43.989 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:09:43.989 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:43.989 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:43.989 
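# --- The block above is one full connect_authenticate pass (auth.sh@65-78).
# Paraphrased, the helper boils down to the following (simplified sketch, not
# the verbatim function; $hostnqn stands for the uuid host NQN used here):
connect_authenticate() {    # e.g. connect_authenticate sha256 ffdhe2048 3
    local digest=$1 dhgroup=$2 keyid=$3 qpairs
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" --dhchap-key "key$keyid"
    hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key "key$keyid"
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]    # plus digest/dhgroup checks
    hostrpc bdev_nvme_detach_controller nvme0
}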
09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:43.989 09:32:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:44.247 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:09:44.247 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:44.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:44.813 09:32:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.380 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:45.639 00:09:45.639 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:45.639 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:45.639 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:45.897 { 00:09:45.897 "cntlid": 17, 00:09:45.897 "qid": 0, 00:09:45.897 "state": "enabled", 00:09:45.897 "thread": "nvmf_tgt_poll_group_000", 00:09:45.897 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:45.897 "listen_address": { 00:09:45.897 "trtype": "TCP", 00:09:45.897 "adrfam": "IPv4", 00:09:45.897 "traddr": "10.0.0.3", 00:09:45.897 "trsvcid": "4420" 00:09:45.897 }, 00:09:45.897 "peer_address": { 00:09:45.897 "trtype": "TCP", 00:09:45.897 "adrfam": "IPv4", 00:09:45.897 "traddr": "10.0.0.1", 00:09:45.897 "trsvcid": "50812" 00:09:45.897 }, 00:09:45.897 "auth": { 00:09:45.897 "state": "completed", 00:09:45.897 "digest": "sha256", 00:09:45.897 "dhgroup": "ffdhe3072" 00:09:45.897 } 00:09:45.897 } 00:09:45.897 ]' 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:45.897 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:46.155 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:46.155 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:46.155 09:32:31 
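# --- Before each group of runs the host is pinned to a single digest/dhgroup
# via bdev_nvme_set_options (here sha256 + ffdhe3072), so a successful attach
# proves the target negotiated exactly that combination rather than falling
# back to another one. Equivalent standalone call (host socket, as above):
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072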
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:46.155 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:46.155 09:32:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:46.413 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:46.413 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:46.980 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:46.980 09:32:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.547 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:47.855 00:09:47.855 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:47.855 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:47.855 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:48.127 { 00:09:48.127 "cntlid": 19, 00:09:48.127 "qid": 0, 00:09:48.127 "state": "enabled", 00:09:48.127 "thread": "nvmf_tgt_poll_group_000", 00:09:48.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:48.127 "listen_address": { 00:09:48.127 "trtype": "TCP", 00:09:48.127 "adrfam": "IPv4", 00:09:48.127 "traddr": "10.0.0.3", 00:09:48.127 "trsvcid": "4420" 00:09:48.127 }, 00:09:48.127 "peer_address": { 00:09:48.127 "trtype": "TCP", 00:09:48.127 "adrfam": "IPv4", 00:09:48.127 "traddr": "10.0.0.1", 00:09:48.127 "trsvcid": "50842" 00:09:48.127 }, 00:09:48.127 "auth": { 00:09:48.127 "state": "completed", 00:09:48.127 "digest": "sha256", 00:09:48.127 "dhgroup": "ffdhe3072" 00:09:48.127 } 00:09:48.127 } 00:09:48.127 ]' 00:09:48.127 09:32:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:48.127 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:48.127 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:48.127 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
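# --- The kernel-initiator side of the same matrix uses nvme-cli: nvme connect
# takes the DHHC-1 secrets directly (--dhchap-secret for the host key,
# --dhchap-ctrl-secret for the bidirectional controller key). Minimal sketch
# with placeholder secrets (real values appear in the log lines above):
nvme connect -t tcp -a 10.0.0.3 -l 0 -i 1 \
    -n nqn.2024-03.io.spdk:cnode0 \
    -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 \
    --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 \
    --dhchap-secret 'DHHC-1:00:<base64-host-key>:' \
    --dhchap-ctrl-secret 'DHHC-1:03:<base64-ctrl-key>:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0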
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:48.127 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:48.385 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:48.385 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:48.385 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:48.643 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:48.643 09:32:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:49.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:49.209 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:49.467 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:09:49.467 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:49.467 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:49.467 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:49.467 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:49.467 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.725 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:49.983 00:09:49.983 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:49.983 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:49.983 09:32:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:50.241 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:50.241 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:50.241 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.241 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:50.241 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.241 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:50.241 { 00:09:50.241 "cntlid": 21, 00:09:50.241 "qid": 0, 00:09:50.241 "state": "enabled", 00:09:50.241 "thread": "nvmf_tgt_poll_group_000", 00:09:50.242 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:50.242 "listen_address": { 00:09:50.242 "trtype": "TCP", 00:09:50.242 "adrfam": "IPv4", 00:09:50.242 "traddr": "10.0.0.3", 00:09:50.242 "trsvcid": "4420" 00:09:50.242 }, 00:09:50.242 "peer_address": { 00:09:50.242 "trtype": "TCP", 00:09:50.242 "adrfam": "IPv4", 00:09:50.242 "traddr": "10.0.0.1", 00:09:50.242 "trsvcid": "50872" 00:09:50.242 }, 00:09:50.242 "auth": { 00:09:50.242 "state": "completed", 00:09:50.242 "digest": "sha256", 00:09:50.242 "dhgroup": "ffdhe3072" 00:09:50.242 } 00:09:50.242 } 00:09:50.242 ]' 00:09:50.242 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:50.242 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:50.242 09:32:36 
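# --- Shape of the secrets being exercised: "DHHC-1:NN:<base64>:". Per the
# NVMe DH-HMAC-CHAP secret representation, NN identifies the key transform
# (00 = untransformed key, 01/02/03 = SHA-256/384/512-sized keys) and the
# base64 payload carries the key followed by a CRC-32 tail - which is why the
# DHHC-1:03: secrets above are visibly longer than the DHHC-1:00: ones.
# (Interpretation of NN per spec; worth verifying against the TP-8006 text.)
secret='DHHC-1:00:YTQ2...x5yGVw==:'   # truncated placeholder; full value in the log
b64=${secret#DHHC-1:00:}; b64=${b64%:}
echo -n "$b64" | base64 -d | wc -c    # expect 36 for the full value: 32-byte key + CRC-32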
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:50.500 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:50.500 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:50.500 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:50.500 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:50.500 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:50.758 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:50.758 09:32:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:51.690 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.690 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:09:51.691 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:51.691 09:32:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:09:52.256 00:09:52.256 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:52.256 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:52.256 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:52.515 { 00:09:52.515 "cntlid": 23, 00:09:52.515 "qid": 0, 00:09:52.515 "state": "enabled", 00:09:52.515 "thread": "nvmf_tgt_poll_group_000", 00:09:52.515 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:52.515 "listen_address": { 00:09:52.515 "trtype": "TCP", 00:09:52.515 "adrfam": "IPv4", 00:09:52.515 "traddr": "10.0.0.3", 00:09:52.515 "trsvcid": "4420" 00:09:52.515 }, 00:09:52.515 "peer_address": { 00:09:52.515 "trtype": "TCP", 00:09:52.515 "adrfam": "IPv4", 00:09:52.515 "traddr": "10.0.0.1", 00:09:52.515 "trsvcid": "50904" 00:09:52.515 }, 00:09:52.515 "auth": { 00:09:52.515 "state": "completed", 00:09:52.515 "digest": "sha256", 00:09:52.515 "dhgroup": "ffdhe3072" 00:09:52.515 } 00:09:52.515 } 00:09:52.515 ]' 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:09:52.515 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:52.773 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:52.773 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:52.773 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:53.031 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:09:53.031 09:32:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:53.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:53.597 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:53.856 09:32:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:09:54.422 00:09:54.422 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:54.422 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:54.422 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:54.681 { 00:09:54.681 "cntlid": 25, 00:09:54.681 "qid": 0, 00:09:54.681 "state": "enabled", 00:09:54.681 "thread": "nvmf_tgt_poll_group_000", 00:09:54.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:54.681 "listen_address": { 00:09:54.681 "trtype": "TCP", 00:09:54.681 "adrfam": "IPv4", 00:09:54.681 "traddr": "10.0.0.3", 00:09:54.681 "trsvcid": "4420" 00:09:54.681 }, 00:09:54.681 "peer_address": { 00:09:54.681 "trtype": "TCP", 00:09:54.681 "adrfam": "IPv4", 00:09:54.681 "traddr": "10.0.0.1", 00:09:54.681 "trsvcid": "50922" 00:09:54.681 }, 00:09:54.681 "auth": { 00:09:54.681 "state": "completed", 00:09:54.681 "digest": "sha256", 00:09:54.681 "dhgroup": "ffdhe4096" 00:09:54.681 } 00:09:54.681 } 00:09:54.681 ]' 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:54.681 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:54.940 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:54.940 09:32:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:55.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:55.876 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
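# --- key0..key3 / ckey0..ckey2 in the RPCs above are names, not key material:
# they refer to keys the host instance already holds, registered earlier in
# the run outside this excerpt - in current SPDK typically keyring entries
# created with keyring_file_add_key pointing at DHHC-1 key files (assumption;
# the registration itself is not visible here). Illustrative registration:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
    keyring_file_add_key key1 /tmp/key1.dhchap    # hypothetical path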
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.135 09:32:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:09:56.394 00:09:56.394 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:56.394 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:56.394 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:56.963 { 00:09:56.963 "cntlid": 27, 00:09:56.963 "qid": 0, 00:09:56.963 "state": "enabled", 00:09:56.963 "thread": "nvmf_tgt_poll_group_000", 00:09:56.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:56.963 "listen_address": { 00:09:56.963 "trtype": "TCP", 00:09:56.963 "adrfam": "IPv4", 00:09:56.963 "traddr": "10.0.0.3", 00:09:56.963 "trsvcid": "4420" 00:09:56.963 }, 00:09:56.963 "peer_address": { 00:09:56.963 "trtype": "TCP", 00:09:56.963 "adrfam": "IPv4", 00:09:56.963 "traddr": "10.0.0.1", 00:09:56.963 "trsvcid": "51090" 00:09:56.963 }, 00:09:56.963 "auth": { 00:09:56.963 "state": "completed", 
00:09:56.963 "digest": "sha256", 00:09:56.963 "dhgroup": "ffdhe4096" 00:09:56.963 } 00:09:56.963 } 00:09:56.963 ]' 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:56.963 09:32:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:57.222 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:57.222 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:09:58.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:58.212 09:32:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:09:58.212 09:32:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.212 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:09:58.781 00:09:58.781 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:09:58.781 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:09:58.781 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:09:59.040 { 00:09:59.040 "cntlid": 29, 00:09:59.040 "qid": 0, 00:09:59.040 "state": "enabled", 00:09:59.040 "thread": "nvmf_tgt_poll_group_000", 00:09:59.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:09:59.040 "listen_address": { 00:09:59.040 "trtype": "TCP", 00:09:59.040 "adrfam": "IPv4", 00:09:59.040 "traddr": "10.0.0.3", 00:09:59.040 "trsvcid": "4420" 00:09:59.040 }, 00:09:59.040 "peer_address": { 00:09:59.040 "trtype": "TCP", 00:09:59.040 "adrfam": 
"IPv4", 00:09:59.040 "traddr": "10.0.0.1", 00:09:59.040 "trsvcid": "51108" 00:09:59.040 }, 00:09:59.040 "auth": { 00:09:59.040 "state": "completed", 00:09:59.040 "digest": "sha256", 00:09:59.040 "dhgroup": "ffdhe4096" 00:09:59.040 } 00:09:59.040 } 00:09:59.040 ]' 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:09:59.040 09:32:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:09:59.299 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:09:59.299 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:09:59.299 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:09:59.299 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:09:59.299 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:09:59.558 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:09:59.558 09:32:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:00.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:00.126 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:10:00.692 09:32:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:00.692 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:00.951 00:10:00.951 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:00.951 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:00.951 09:32:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:01.209 { 00:10:01.209 "cntlid": 31, 00:10:01.209 "qid": 0, 00:10:01.209 "state": "enabled", 00:10:01.209 "thread": "nvmf_tgt_poll_group_000", 00:10:01.209 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:01.209 "listen_address": { 00:10:01.209 "trtype": "TCP", 00:10:01.209 "adrfam": "IPv4", 00:10:01.209 "traddr": "10.0.0.3", 00:10:01.209 "trsvcid": "4420" 00:10:01.209 }, 00:10:01.209 "peer_address": { 00:10:01.209 "trtype": "TCP", 
00:10:01.209 "adrfam": "IPv4", 00:10:01.209 "traddr": "10.0.0.1", 00:10:01.209 "trsvcid": "51134" 00:10:01.209 }, 00:10:01.209 "auth": { 00:10:01.209 "state": "completed", 00:10:01.209 "digest": "sha256", 00:10:01.209 "dhgroup": "ffdhe4096" 00:10:01.209 } 00:10:01.209 } 00:10:01.209 ]' 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:01.209 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:01.467 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:01.467 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:01.467 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:01.725 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:01.725 09:32:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:02.291 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:02.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:02.291 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:02.291 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.292 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.292 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.292 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:02.292 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:02.292 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:02.292 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:10:02.551 
09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:02.551 09:32:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:03.118 00:10:03.118 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:03.118 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:03.118 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:03.376 { 00:10:03.376 "cntlid": 33, 00:10:03.376 "qid": 0, 00:10:03.376 "state": "enabled", 00:10:03.376 "thread": "nvmf_tgt_poll_group_000", 00:10:03.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:03.376 "listen_address": { 00:10:03.376 "trtype": "TCP", 00:10:03.376 "adrfam": "IPv4", 00:10:03.376 "traddr": 
"10.0.0.3", 00:10:03.376 "trsvcid": "4420" 00:10:03.376 }, 00:10:03.376 "peer_address": { 00:10:03.376 "trtype": "TCP", 00:10:03.376 "adrfam": "IPv4", 00:10:03.376 "traddr": "10.0.0.1", 00:10:03.376 "trsvcid": "51176" 00:10:03.376 }, 00:10:03.376 "auth": { 00:10:03.376 "state": "completed", 00:10:03.376 "digest": "sha256", 00:10:03.376 "dhgroup": "ffdhe6144" 00:10:03.376 } 00:10:03.376 } 00:10:03.376 ]' 00:10:03.376 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:03.634 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:03.891 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:03.891 09:32:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:04.457 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:04.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:04.715 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:04.973 09:32:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:05.540 00:10:05.540 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:05.540 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:05.540 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:05.797 { 00:10:05.797 "cntlid": 35, 00:10:05.797 "qid": 0, 00:10:05.797 "state": "enabled", 00:10:05.797 "thread": "nvmf_tgt_poll_group_000", 
00:10:05.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:05.797 "listen_address": { 00:10:05.797 "trtype": "TCP", 00:10:05.797 "adrfam": "IPv4", 00:10:05.797 "traddr": "10.0.0.3", 00:10:05.797 "trsvcid": "4420" 00:10:05.797 }, 00:10:05.797 "peer_address": { 00:10:05.797 "trtype": "TCP", 00:10:05.797 "adrfam": "IPv4", 00:10:05.797 "traddr": "10.0.0.1", 00:10:05.797 "trsvcid": "51200" 00:10:05.797 }, 00:10:05.797 "auth": { 00:10:05.797 "state": "completed", 00:10:05.797 "digest": "sha256", 00:10:05.797 "dhgroup": "ffdhe6144" 00:10:05.797 } 00:10:05.797 } 00:10:05.797 ]' 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:05.797 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:05.798 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:05.798 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:05.798 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:05.798 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:05.798 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:05.798 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:06.056 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:06.056 09:32:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:06.623 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:06.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:06.623 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:06.623 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.623 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:06.881 09:32:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.881 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.140 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.140 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.140 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.140 09:32:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:07.726 00:10:07.726 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:07.726 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:07.726 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:07.985 { 
00:10:07.985 "cntlid": 37, 00:10:07.985 "qid": 0, 00:10:07.985 "state": "enabled", 00:10:07.985 "thread": "nvmf_tgt_poll_group_000", 00:10:07.985 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:07.985 "listen_address": { 00:10:07.985 "trtype": "TCP", 00:10:07.985 "adrfam": "IPv4", 00:10:07.985 "traddr": "10.0.0.3", 00:10:07.985 "trsvcid": "4420" 00:10:07.985 }, 00:10:07.985 "peer_address": { 00:10:07.985 "trtype": "TCP", 00:10:07.985 "adrfam": "IPv4", 00:10:07.985 "traddr": "10.0.0.1", 00:10:07.985 "trsvcid": "34768" 00:10:07.985 }, 00:10:07.985 "auth": { 00:10:07.985 "state": "completed", 00:10:07.985 "digest": "sha256", 00:10:07.985 "dhgroup": "ffdhe6144" 00:10:07.985 } 00:10:07.985 } 00:10:07.985 ]' 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:07.985 09:32:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:08.243 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:08.244 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:09.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:09.179 09:32:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:09.437 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:10.005 00:10:10.005 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:10.005 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:10.005 09:32:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:10:10.264 { 00:10:10.264 "cntlid": 39, 00:10:10.264 "qid": 0, 00:10:10.264 "state": "enabled", 00:10:10.264 "thread": "nvmf_tgt_poll_group_000", 00:10:10.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:10.264 "listen_address": { 00:10:10.264 "trtype": "TCP", 00:10:10.264 "adrfam": "IPv4", 00:10:10.264 "traddr": "10.0.0.3", 00:10:10.264 "trsvcid": "4420" 00:10:10.264 }, 00:10:10.264 "peer_address": { 00:10:10.264 "trtype": "TCP", 00:10:10.264 "adrfam": "IPv4", 00:10:10.264 "traddr": "10.0.0.1", 00:10:10.264 "trsvcid": "34800" 00:10:10.264 }, 00:10:10.264 "auth": { 00:10:10.264 "state": "completed", 00:10:10.264 "digest": "sha256", 00:10:10.264 "dhgroup": "ffdhe6144" 00:10:10.264 } 00:10:10.264 } 00:10:10.264 ]' 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:10.264 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:10.265 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:10.265 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:10.523 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:10.523 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:10.523 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:10.781 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:10.781 09:32:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:11.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
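Note the asymmetry in the key3 rounds (here and in the ffdhe4096 pass earlier): ckeys[3] is empty, so nvmf_subsystem_add_host is called without --dhchap-ctrlr-key and the matching nvme connect carries no --dhchap-ctrl-secret. That exercises unidirectional authentication, where only the host proves its identity and the controller is never challenged in return:

    # Bidirectional (keys 0..2): host and controller authenticate each other.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 \
            --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Unidirectional (key3): no controller key registered, none sent back.
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
            nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 \
            --dhchap-key key3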
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:11.348 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:11.916 09:32:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:12.484 00:10:12.484 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:12.484 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:12.484 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:12.743 { 00:10:12.743 "cntlid": 41, 00:10:12.743 "qid": 0, 00:10:12.743 "state": "enabled", 00:10:12.743 "thread": "nvmf_tgt_poll_group_000", 00:10:12.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:12.743 "listen_address": { 00:10:12.743 "trtype": "TCP", 00:10:12.743 "adrfam": "IPv4", 00:10:12.743 "traddr": "10.0.0.3", 00:10:12.743 "trsvcid": "4420" 00:10:12.743 }, 00:10:12.743 "peer_address": { 00:10:12.743 "trtype": "TCP", 00:10:12.743 "adrfam": "IPv4", 00:10:12.743 "traddr": "10.0.0.1", 00:10:12.743 "trsvcid": "34824" 00:10:12.743 }, 00:10:12.743 "auth": { 00:10:12.743 "state": "completed", 00:10:12.743 "digest": "sha256", 00:10:12.743 "dhgroup": "ffdhe8192" 00:10:12.743 } 00:10:12.743 } 00:10:12.743 ]' 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:12.743 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:13.002 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:13.002 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:13.002 09:32:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:13.260 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:13.260 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:13.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
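The secrets themselves are NVMe DH-HMAC-CHAP key representations, DHHC-1:<hmac>:<base64>:. As background from the NVMe-oF spec rather than from this log: the middle field selects the transformation applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512), and the base64 payload is the secret with a 4-byte CRC-32 appended. A quick sanity check on one key from this run:

    key='DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==:'
    IFS=: read -r _ hmac b64 _ <<< "$key"
    len=$(printf '%s' "$b64" | base64 -d | wc -c)
    echo "hmac id $hmac, $((len - 4))-byte secret + 4-byte crc"   # 48-byte secret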
00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:13.827 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.085 09:32:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:14.652 00:10:14.911 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:14.911 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:14.911 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:15.169 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:15.169 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:15.169 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.169 09:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:15.170 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.170 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:15.170 { 00:10:15.170 "cntlid": 43, 00:10:15.170 "qid": 0, 00:10:15.170 "state": "enabled", 00:10:15.170 "thread": "nvmf_tgt_poll_group_000", 00:10:15.170 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:15.170 "listen_address": { 00:10:15.170 "trtype": "TCP", 00:10:15.170 "adrfam": "IPv4", 00:10:15.170 "traddr": "10.0.0.3", 00:10:15.170 "trsvcid": "4420" 00:10:15.170 }, 00:10:15.170 "peer_address": { 00:10:15.170 "trtype": "TCP", 00:10:15.170 "adrfam": "IPv4", 00:10:15.170 "traddr": "10.0.0.1", 00:10:15.170 "trsvcid": "34850" 00:10:15.170 }, 00:10:15.170 "auth": { 00:10:15.170 "state": "completed", 00:10:15.170 "digest": "sha256", 00:10:15.170 "dhgroup": "ffdhe8192" 00:10:15.170 } 00:10:15.170 } 00:10:15.170 ]' 00:10:15.170 09:33:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:15.170 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:15.737 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:15.737 09:33:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:16.303 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:16.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:16.303 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:16.303 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.304 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
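Between rounds the SPDK-side controller is verified and then torn down so the next digest/dhgroup combination negotiates from scratch. A condensed version of that teardown, using the same socket and controller name as the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Exactly one controller should exist, named as it was attached.
    name=$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]
    "$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0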
00:10:16.304 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.304 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:16.304 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:16.304 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:16.563 09:33:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:17.499 00:10:17.499 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:17.499 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:17.499 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:17.499 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:17.499 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:17.499 09:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.499 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:17.757 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.757 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:17.757 { 00:10:17.757 "cntlid": 45, 00:10:17.757 "qid": 0, 00:10:17.757 "state": "enabled", 00:10:17.757 "thread": "nvmf_tgt_poll_group_000", 00:10:17.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:17.757 "listen_address": { 00:10:17.757 "trtype": "TCP", 00:10:17.757 "adrfam": "IPv4", 00:10:17.757 "traddr": "10.0.0.3", 00:10:17.757 "trsvcid": "4420" 00:10:17.757 }, 00:10:17.757 "peer_address": { 00:10:17.757 "trtype": "TCP", 00:10:17.757 "adrfam": "IPv4", 00:10:17.757 "traddr": "10.0.0.1", 00:10:17.757 "trsvcid": "40842" 00:10:17.757 }, 00:10:17.757 "auth": { 00:10:17.757 "state": "completed", 00:10:17.757 "digest": "sha256", 00:10:17.757 "dhgroup": "ffdhe8192" 00:10:17.757 } 00:10:17.757 } 00:10:17.757 ]' 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:17.758 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:18.016 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:18.016 09:33:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:18.951 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:18.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:18.951 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:18.952 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
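Each round's ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) line is a bash ${var:+word} conditional expansion into an array: if ckeys[$3] is unset or empty, the array stays empty and no controller-key flag is emitted at all. That is why the key3 rounds in this trace call nvmf_subsystem_add_host with --dhchap-key key3 alone (unidirectional authentication), while the key1/key2 rounds also pass --dhchap-ctrlr-key (bidirectional). A standalone illustration with hypothetical values:

    ckeys=("c0" "c1" "c2" "")                        # index 3 deliberately empty
    ckey=(${ckeys[3]:+--dhchap-ctrlr-key "ckey3"})
    echo "${#ckey[@]}"                               # 0: flag omitted entirely
    ckey=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})
    echo "${#ckey[@]}"                               # 2: flag plus value appended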
00:10:18.952 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.952 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.952 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:18.952 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:18.952 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:19.210 09:33:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:19.210 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:19.210 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:19.211 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.211 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:19.832 00:10:19.832 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:19.832 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:19.832 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:20.106 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:20.106 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:20.106 
09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.106 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:20.106 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.106 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:20.106 { 00:10:20.106 "cntlid": 47, 00:10:20.106 "qid": 0, 00:10:20.106 "state": "enabled", 00:10:20.106 "thread": "nvmf_tgt_poll_group_000", 00:10:20.106 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:20.106 "listen_address": { 00:10:20.106 "trtype": "TCP", 00:10:20.106 "adrfam": "IPv4", 00:10:20.106 "traddr": "10.0.0.3", 00:10:20.106 "trsvcid": "4420" 00:10:20.106 }, 00:10:20.106 "peer_address": { 00:10:20.106 "trtype": "TCP", 00:10:20.106 "adrfam": "IPv4", 00:10:20.106 "traddr": "10.0.0.1", 00:10:20.106 "trsvcid": "40864" 00:10:20.106 }, 00:10:20.106 "auth": { 00:10:20.106 "state": "completed", 00:10:20.106 "digest": "sha256", 00:10:20.106 "dhgroup": "ffdhe8192" 00:10:20.106 } 00:10:20.106 } 00:10:20.106 ]' 00:10:20.106 09:33:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:20.106 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:10:20.106 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:20.106 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:10:20.106 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:20.365 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:20.365 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:20.365 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:20.624 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:20.624 09:33:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:21.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
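After every attach, the script fetches the subsystem's qpairs and asserts on the auth object of the JSON shown above. Records like [[ sha256 == \s\h\a\2\5\6 ]] are not corrupted output: xtrace backslash-escapes the right-hand side of a [[ ]] match to show it is compared literally. The three probes amount to this guard, assuming the JSON is already captured in $qpairs as in the trace:

    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

An auth state of "completed" confirms the DH-HMAC-CHAP exchange finished on that queue pair rather than the connection proceeding unauthenticated.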
00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:21.191 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:21.758 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:22.017 00:10:22.017 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:22.017 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:22.017 09:33:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:22.276 { 00:10:22.276 "cntlid": 49, 00:10:22.276 "qid": 0, 00:10:22.276 "state": "enabled", 00:10:22.276 "thread": "nvmf_tgt_poll_group_000", 00:10:22.276 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:22.276 "listen_address": { 00:10:22.276 "trtype": "TCP", 00:10:22.276 "adrfam": "IPv4", 00:10:22.276 "traddr": "10.0.0.3", 00:10:22.276 "trsvcid": "4420" 00:10:22.276 }, 00:10:22.276 "peer_address": { 00:10:22.276 "trtype": "TCP", 00:10:22.276 "adrfam": "IPv4", 00:10:22.276 "traddr": "10.0.0.1", 00:10:22.276 "trsvcid": "40896" 00:10:22.276 }, 00:10:22.276 "auth": { 00:10:22.276 "state": "completed", 00:10:22.276 "digest": "sha384", 00:10:22.276 "dhgroup": "null" 00:10:22.276 } 00:10:22.276 } 00:10:22.276 ]' 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:22.276 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:22.845 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:22.845 09:33:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:23.412 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:23.412 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:23.412 09:33:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:23.412 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.412 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.413 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.413 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:23.413 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:23.413 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.672 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:23.931 00:10:23.931 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:23.931 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
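The --dhchap-secret / --dhchap-ctrl-secret values follow the NVMe in-band authentication secret representation, DHHC-1:<t>:<base64 key material>:, where <t> identifies the transformation applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512). nvme-cli can mint such a secret; a sketch, with flag spellings from recent nvme-cli that should be checked against the installed version:

    # generate a 32-byte, SHA-256-transformed DH-HMAC-CHAP secret for this host
    nvme gen-dhchap-key --key-length=32 --hmac=1 \
        --nqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8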
00:10:23.931 09:33:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:24.190 { 00:10:24.190 "cntlid": 51, 00:10:24.190 "qid": 0, 00:10:24.190 "state": "enabled", 00:10:24.190 "thread": "nvmf_tgt_poll_group_000", 00:10:24.190 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:24.190 "listen_address": { 00:10:24.190 "trtype": "TCP", 00:10:24.190 "adrfam": "IPv4", 00:10:24.190 "traddr": "10.0.0.3", 00:10:24.190 "trsvcid": "4420" 00:10:24.190 }, 00:10:24.190 "peer_address": { 00:10:24.190 "trtype": "TCP", 00:10:24.190 "adrfam": "IPv4", 00:10:24.190 "traddr": "10.0.0.1", 00:10:24.190 "trsvcid": "40914" 00:10:24.190 }, 00:10:24.190 "auth": { 00:10:24.190 "state": "completed", 00:10:24.190 "digest": "sha384", 00:10:24.190 "dhgroup": "null" 00:10:24.190 } 00:10:24.190 } 00:10:24.190 ]' 00:10:24.190 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:24.449 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:24.708 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:24.708 09:33:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:25.276 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:25.276 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:25.535 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:26.103 00:10:26.103 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:26.103 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:10:26.103 09:33:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:26.362 { 00:10:26.362 "cntlid": 53, 00:10:26.362 "qid": 0, 00:10:26.362 "state": "enabled", 00:10:26.362 "thread": "nvmf_tgt_poll_group_000", 00:10:26.362 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:26.362 "listen_address": { 00:10:26.362 "trtype": "TCP", 00:10:26.362 "adrfam": "IPv4", 00:10:26.362 "traddr": "10.0.0.3", 00:10:26.362 "trsvcid": "4420" 00:10:26.362 }, 00:10:26.362 "peer_address": { 00:10:26.362 "trtype": "TCP", 00:10:26.362 "adrfam": "IPv4", 00:10:26.362 "traddr": "10.0.0.1", 00:10:26.362 "trsvcid": "42554" 00:10:26.362 }, 00:10:26.362 "auth": { 00:10:26.362 "state": "completed", 00:10:26.362 "digest": "sha384", 00:10:26.362 "dhgroup": "null" 00:10:26.362 } 00:10:26.362 } 00:10:26.362 ]' 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:26.362 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:26.930 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:26.930 09:33:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:27.496 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:27.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:27.496 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:27.496 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.496 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.497 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.497 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:27.497 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:27.497 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:27.756 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:28.015 00:10:28.015 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:28.015 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
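A note on the recurring bookkeeping records: every rpc_cmd invocation in this trace is bracketed by a common/autotest_common.sh@561 xtrace_disable guard and a common/autotest_common.sh@589 [[ 0 == 0 ]] check, the latter evidently the wrapper's exit-status comparison with both sides already expanded by xtrace. These pairs are plumbing around each RPC call, not additional test assertions.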
00:10:28.015 09:33:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:28.274 { 00:10:28.274 "cntlid": 55, 00:10:28.274 "qid": 0, 00:10:28.274 "state": "enabled", 00:10:28.274 "thread": "nvmf_tgt_poll_group_000", 00:10:28.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:28.274 "listen_address": { 00:10:28.274 "trtype": "TCP", 00:10:28.274 "adrfam": "IPv4", 00:10:28.274 "traddr": "10.0.0.3", 00:10:28.274 "trsvcid": "4420" 00:10:28.274 }, 00:10:28.274 "peer_address": { 00:10:28.274 "trtype": "TCP", 00:10:28.274 "adrfam": "IPv4", 00:10:28.274 "traddr": "10.0.0.1", 00:10:28.274 "trsvcid": "42578" 00:10:28.274 }, 00:10:28.274 "auth": { 00:10:28.274 "state": "completed", 00:10:28.274 "digest": "sha384", 00:10:28.274 "dhgroup": "null" 00:10:28.274 } 00:10:28.274 } 00:10:28.274 ]' 00:10:28.274 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:28.533 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:28.791 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:28.791 09:33:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:29.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
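Each key round then replays the same credentials through the Linux kernel initiator and revokes the host, which is what produces the nvme disconnect and nvmf_subsystem_remove_host records above. Condensed from the trace, with $hostnqn, $hostid, $key and $ckey standing in for the literal values shown (the host ID reuses the UUID embedded in the host NQN; rounds without a controller key simply drop --dhchap-ctrl-secret):

    nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" -l 0 \
        --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"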
00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:29.360 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:29.928 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.929 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.929 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:29.929 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:30.187 00:10:30.187 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:30.187 
09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:30.187 09:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:30.447 { 00:10:30.447 "cntlid": 57, 00:10:30.447 "qid": 0, 00:10:30.447 "state": "enabled", 00:10:30.447 "thread": "nvmf_tgt_poll_group_000", 00:10:30.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:30.447 "listen_address": { 00:10:30.447 "trtype": "TCP", 00:10:30.447 "adrfam": "IPv4", 00:10:30.447 "traddr": "10.0.0.3", 00:10:30.447 "trsvcid": "4420" 00:10:30.447 }, 00:10:30.447 "peer_address": { 00:10:30.447 "trtype": "TCP", 00:10:30.447 "adrfam": "IPv4", 00:10:30.447 "traddr": "10.0.0.1", 00:10:30.447 "trsvcid": "42606" 00:10:30.447 }, 00:10:30.447 "auth": { 00:10:30.447 "state": "completed", 00:10:30.447 "digest": "sha384", 00:10:30.447 "dhgroup": "ffdhe2048" 00:10:30.447 } 00:10:30.447 } 00:10:30.447 ]' 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:30.447 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:30.705 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:30.705 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:30.705 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:30.965 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:30.965 09:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: 
--dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:31.532 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:31.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:31.532 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:31.532 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.532 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.532 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.532 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:31.533 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:31.533 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:31.791 09:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:32.050 00:10:32.307 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:32.307 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:32.307 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:32.566 { 00:10:32.566 "cntlid": 59, 00:10:32.566 "qid": 0, 00:10:32.566 "state": "enabled", 00:10:32.566 "thread": "nvmf_tgt_poll_group_000", 00:10:32.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:32.566 "listen_address": { 00:10:32.566 "trtype": "TCP", 00:10:32.566 "adrfam": "IPv4", 00:10:32.566 "traddr": "10.0.0.3", 00:10:32.566 "trsvcid": "4420" 00:10:32.566 }, 00:10:32.566 "peer_address": { 00:10:32.566 "trtype": "TCP", 00:10:32.566 "adrfam": "IPv4", 00:10:32.566 "traddr": "10.0.0.1", 00:10:32.566 "trsvcid": "42628" 00:10:32.566 }, 00:10:32.566 "auth": { 00:10:32.566 "state": "completed", 00:10:32.566 "digest": "sha384", 00:10:32.566 "dhgroup": "ffdhe2048" 00:10:32.566 } 00:10:32.566 } 00:10:32.566 ]' 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:32.566 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:33.164 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:33.164 09:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:33.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:33.732 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:33.733 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:33.992 09:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:34.558 00:10:34.558 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:34.558 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:34.558 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:34.817 { 00:10:34.817 "cntlid": 61, 00:10:34.817 "qid": 0, 00:10:34.817 "state": "enabled", 00:10:34.817 "thread": "nvmf_tgt_poll_group_000", 00:10:34.817 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:34.817 "listen_address": { 00:10:34.817 "trtype": "TCP", 00:10:34.817 "adrfam": "IPv4", 00:10:34.817 "traddr": "10.0.0.3", 00:10:34.817 "trsvcid": "4420" 00:10:34.817 }, 00:10:34.817 "peer_address": { 00:10:34.817 "trtype": "TCP", 00:10:34.817 "adrfam": "IPv4", 00:10:34.817 "traddr": "10.0.0.1", 00:10:34.817 "trsvcid": "42654" 00:10:34.817 }, 00:10:34.817 "auth": { 00:10:34.817 "state": "completed", 00:10:34.817 "digest": "sha384", 00:10:34.817 "dhgroup": "ffdhe2048" 00:10:34.817 } 00:10:34.817 } 00:10:34.817 ]' 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:34.817 09:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:35.076 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:35.076 09:33:21 
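Every hostrpc line in this trace expands to the same rpc.py invocation against the initiator-side application's private RPC socket, /var/tmp/host.sock, rather than the target's default socket. A minimal sketch of what the wrapper at target/auth.sh@31 evidently does, judging from its expansion above (the real definition lives outside this excerpt):

    hostrpc() {
        # Forward any RPC to the host (initiator) SPDK app via its own socket.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }

    # e.g. the controller-name check seen repeatedly above:
    hostrpc bdev_nvme_get_controllers | jq -r '.[].name'
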
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:36.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:36.012 09:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.271 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
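The bdev_connect helper (called at target/auth.sh@71, expanding at @60) pins the transport details so that each iteration varies only the bdev name and the DH-HMAC-CHAP key flags; judging from its expansion in the trace, it is equivalent to:

    bdev_connect() {
        # Fixed TCP listener and NQNs, per the attach_controller expansion above;
        # callers pass -b <name> plus the --dhchap-key/--dhchap-ctrlr-key flags.
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 \
            -n nqn.2024-03.io.spdk:cnode0 "$@"
    }
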
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:36.530 00:10:36.530 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:36.530 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:36.530 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:36.788 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:36.788 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:36.788 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.788 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:36.788 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.788 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:36.788 { 00:10:36.788 "cntlid": 63, 00:10:36.788 "qid": 0, 00:10:36.788 "state": "enabled", 00:10:36.788 "thread": "nvmf_tgt_poll_group_000", 00:10:36.788 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:36.788 "listen_address": { 00:10:36.788 "trtype": "TCP", 00:10:36.788 "adrfam": "IPv4", 00:10:36.788 "traddr": "10.0.0.3", 00:10:36.788 "trsvcid": "4420" 00:10:36.788 }, 00:10:36.788 "peer_address": { 00:10:36.788 "trtype": "TCP", 00:10:36.788 "adrfam": "IPv4", 00:10:36.788 "traddr": "10.0.0.1", 00:10:36.789 "trsvcid": "33748" 00:10:36.789 }, 00:10:36.789 "auth": { 00:10:36.789 "state": "completed", 00:10:36.789 "digest": "sha384", 00:10:36.789 "dhgroup": "ffdhe2048" 00:10:36.789 } 00:10:36.789 } 00:10:36.789 ]' 00:10:36.789 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:36.789 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:36.789 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:37.047 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:10:37.047 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:37.047 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:37.047 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:37.047 09:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:37.305 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:37.305 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:37.873 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:38.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:38.132 09:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:10:38.391 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:38.650 00:10:38.651 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:38.651 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:38.651 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:38.909 { 00:10:38.909 "cntlid": 65, 00:10:38.909 "qid": 0, 00:10:38.909 "state": "enabled", 00:10:38.909 "thread": "nvmf_tgt_poll_group_000", 00:10:38.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:38.909 "listen_address": { 00:10:38.909 "trtype": "TCP", 00:10:38.909 "adrfam": "IPv4", 00:10:38.909 "traddr": "10.0.0.3", 00:10:38.909 "trsvcid": "4420" 00:10:38.909 }, 00:10:38.909 "peer_address": { 00:10:38.909 "trtype": "TCP", 00:10:38.909 "adrfam": "IPv4", 00:10:38.909 "traddr": "10.0.0.1", 00:10:38.909 "trsvcid": "33774" 00:10:38.909 }, 00:10:38.909 "auth": { 00:10:38.909 "state": "completed", 00:10:38.909 "digest": "sha384", 00:10:38.909 "dhgroup": "ffdhe3072" 00:10:38.909 } 00:10:38.909 } 00:10:38.909 ]' 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:38.909 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:39.168 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:39.168 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:39.168 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:39.168 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:39.168 09:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:39.427 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:39.427 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:40.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:40.363 09:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.363 09:33:26 
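The recurring ckey=() assignment at target/auth.sh@68 relies on bash's ${parameter:+word} expansion: the --dhchap-ctrlr-key flag pair is produced only when ckeys[$3] (the controller key for the key index passed as connect_authenticate's third argument, e.g. the 1 in "connect_authenticate sha384 ffdhe3072 1" above) is set and non-empty, so unidirectional slots silently drop the flag. A self-contained illustration with made-up key material:

    ckeys=("secret0" "secret1" "secret2" "")   # slot 3 has no controller key
    keyid=1
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]}"    # -> --dhchap-ctrlr-key ckey1
    keyid=3
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${#ckey[@]}"   # -> 0; no flag at all is passed for slot 3
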
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.363 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:40.929 00:10:40.929 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:40.929 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:40.929 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:41.188 { 00:10:41.188 "cntlid": 67, 00:10:41.188 "qid": 0, 00:10:41.188 "state": "enabled", 00:10:41.188 "thread": "nvmf_tgt_poll_group_000", 00:10:41.188 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:41.188 "listen_address": { 00:10:41.188 "trtype": "TCP", 00:10:41.188 "adrfam": "IPv4", 00:10:41.188 "traddr": "10.0.0.3", 00:10:41.188 "trsvcid": "4420" 00:10:41.188 }, 00:10:41.188 "peer_address": { 00:10:41.188 "trtype": "TCP", 00:10:41.188 "adrfam": "IPv4", 00:10:41.188 "traddr": "10.0.0.1", 00:10:41.188 "trsvcid": "33784" 00:10:41.188 }, 00:10:41.188 "auth": { 00:10:41.188 "state": "completed", 00:10:41.188 "digest": "sha384", 00:10:41.188 "dhgroup": "ffdhe3072" 00:10:41.188 } 00:10:41.188 } 00:10:41.188 ]' 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:41.188 09:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:41.188 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:41.188 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:41.188 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:41.188 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:41.188 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:41.447 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:41.447 09:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:42.382 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:42.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:42.382 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:42.382 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.383 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.383 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.383 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:42.383 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:42.383 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.641 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:42.900 00:10:42.900 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:42.900 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:42.900 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:43.158 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:43.158 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:43.158 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.158 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:43.158 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.158 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:43.158 { 00:10:43.158 "cntlid": 69, 00:10:43.158 "qid": 0, 00:10:43.158 "state": "enabled", 00:10:43.158 "thread": "nvmf_tgt_poll_group_000", 00:10:43.158 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:43.158 "listen_address": { 00:10:43.158 "trtype": "TCP", 00:10:43.158 "adrfam": "IPv4", 00:10:43.158 "traddr": "10.0.0.3", 00:10:43.158 "trsvcid": "4420" 00:10:43.158 }, 00:10:43.158 "peer_address": { 00:10:43.158 "trtype": "TCP", 00:10:43.158 "adrfam": "IPv4", 00:10:43.158 "traddr": "10.0.0.1", 00:10:43.158 "trsvcid": "33818" 00:10:43.158 }, 00:10:43.158 "auth": { 00:10:43.158 "state": "completed", 00:10:43.158 "digest": "sha384", 00:10:43.159 "dhgroup": "ffdhe3072" 00:10:43.159 } 00:10:43.159 } 00:10:43.159 ]' 00:10:43.159 09:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:43.159 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:43.159 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:43.159 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:43.159 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:43.417 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:43.417 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:10:43.417 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:43.675 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:43.675 09:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:44.240 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:44.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:44.240 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:44.240 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.240 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.241 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.241 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:44.241 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:44.241 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:44.499 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:45.065 00:10:45.065 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:45.065 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:45.065 09:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:45.323 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:45.323 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:45.323 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.323 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:45.323 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.323 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:45.323 { 00:10:45.323 "cntlid": 71, 00:10:45.323 "qid": 0, 00:10:45.323 "state": "enabled", 00:10:45.323 "thread": "nvmf_tgt_poll_group_000", 00:10:45.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:45.323 "listen_address": { 00:10:45.323 "trtype": "TCP", 00:10:45.323 "adrfam": "IPv4", 00:10:45.323 "traddr": "10.0.0.3", 00:10:45.323 "trsvcid": "4420" 00:10:45.323 }, 00:10:45.323 "peer_address": { 00:10:45.323 "trtype": "TCP", 00:10:45.323 "adrfam": "IPv4", 00:10:45.323 "traddr": "10.0.0.1", 00:10:45.323 "trsvcid": "33840" 00:10:45.323 }, 00:10:45.323 "auth": { 00:10:45.323 "state": "completed", 00:10:45.323 "digest": "sha384", 00:10:45.323 "dhgroup": "ffdhe3072" 00:10:45.323 } 00:10:45.323 } 00:10:45.323 ]' 00:10:45.324 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:45.324 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:45.324 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:45.324 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:10:45.324 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:45.581 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:45.581 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:45.582 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:45.869 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:45.869 09:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:46.437 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:46.437 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:46.696 09:33:32 
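Each cycle validates the freshly authenticated qpair the same way: pull the qpair list from the target and assert on the three auth fields. Condensed, the checks at target/auth.sh@74-77 amount to the following (rpc_cmd being the target-side analogue of hostrpc, talking to the target app's default socket):

    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
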
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.696 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:46.954 00:10:46.954 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:46.954 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:46.954 09:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:47.212 { 00:10:47.212 "cntlid": 73, 00:10:47.212 "qid": 0, 00:10:47.212 "state": "enabled", 00:10:47.212 "thread": "nvmf_tgt_poll_group_000", 00:10:47.212 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:47.212 "listen_address": { 00:10:47.212 "trtype": "TCP", 00:10:47.212 "adrfam": "IPv4", 00:10:47.212 "traddr": "10.0.0.3", 00:10:47.212 "trsvcid": "4420" 00:10:47.212 }, 00:10:47.212 "peer_address": { 00:10:47.212 "trtype": "TCP", 00:10:47.212 "adrfam": "IPv4", 00:10:47.212 "traddr": "10.0.0.1", 00:10:47.212 "trsvcid": "43582" 00:10:47.212 }, 00:10:47.212 "auth": { 00:10:47.212 "state": "completed", 00:10:47.212 "digest": "sha384", 00:10:47.212 "dhgroup": "ffdhe4096" 00:10:47.212 } 00:10:47.212 } 00:10:47.212 ]' 00:10:47.212 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:47.470 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:47.729 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:47.729 09:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:48.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:48.295 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.553 09:33:34 
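The DHHC-1:<hh>:<base64>: strings handed to nvme connect are NVMe TP 8006 configured secrets; the two-digit field after DHHC-1 records which hash transformed the secret for this host NQN (00 = key used as-is, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). Comparable test secrets can be minted with nvme-cli along these lines (illustrative invocation; check your nvme-cli version for exact flag spellings):

    # Generate a SHA-384-transformed DH-HMAC-CHAP secret for the host NQN.
    nvme gen-dhchap-key --hmac=2 \
        --nqn nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8
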
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.553 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:48.811 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.811 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.812 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:48.812 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:49.070 00:10:49.070 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:49.070 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:49.070 09:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:49.329 { 00:10:49.329 "cntlid": 75, 00:10:49.329 "qid": 0, 00:10:49.329 "state": "enabled", 00:10:49.329 "thread": "nvmf_tgt_poll_group_000", 00:10:49.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:49.329 "listen_address": { 00:10:49.329 "trtype": "TCP", 00:10:49.329 "adrfam": "IPv4", 00:10:49.329 "traddr": "10.0.0.3", 00:10:49.329 "trsvcid": "4420" 00:10:49.329 }, 00:10:49.329 "peer_address": { 00:10:49.329 "trtype": "TCP", 00:10:49.329 "adrfam": "IPv4", 00:10:49.329 "traddr": "10.0.0.1", 00:10:49.329 "trsvcid": "43616" 00:10:49.329 }, 00:10:49.329 "auth": { 00:10:49.329 "state": "completed", 00:10:49.329 "digest": "sha384", 00:10:49.329 "dhgroup": "ffdhe4096" 00:10:49.329 } 00:10:49.329 } 00:10:49.329 ]' 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:49.329 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:49.588 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:10:49.588 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:49.588 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:49.588 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:49.588 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:49.847 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:49.847 09:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:50.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:50.414 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:50.673 09:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:51.240 00:10:51.240 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:51.240 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:51.240 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:51.498 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:51.498 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:51.498 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:51.499 { 00:10:51.499 "cntlid": 77, 00:10:51.499 "qid": 0, 00:10:51.499 "state": "enabled", 00:10:51.499 "thread": "nvmf_tgt_poll_group_000", 00:10:51.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:51.499 "listen_address": { 00:10:51.499 "trtype": "TCP", 00:10:51.499 "adrfam": "IPv4", 00:10:51.499 "traddr": "10.0.0.3", 00:10:51.499 "trsvcid": "4420" 00:10:51.499 }, 00:10:51.499 "peer_address": { 00:10:51.499 "trtype": "TCP", 00:10:51.499 "adrfam": "IPv4", 00:10:51.499 "traddr": "10.0.0.1", 00:10:51.499 "trsvcid": "43632" 00:10:51.499 }, 00:10:51.499 "auth": { 00:10:51.499 "state": "completed", 00:10:51.499 "digest": "sha384", 00:10:51.499 "dhgroup": "ffdhe4096" 00:10:51.499 } 00:10:51.499 } 00:10:51.499 ]' 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:51.499 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:52.066 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:52.066 09:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:52.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:52.633 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:52.892 09:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:52.892 09:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:10:53.460 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:53.460 { 00:10:53.460 "cntlid": 79, 00:10:53.460 "qid": 0, 00:10:53.460 "state": "enabled", 00:10:53.460 "thread": "nvmf_tgt_poll_group_000", 00:10:53.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:53.460 "listen_address": { 00:10:53.460 "trtype": "TCP", 00:10:53.460 "adrfam": "IPv4", 00:10:53.460 "traddr": "10.0.0.3", 00:10:53.460 "trsvcid": "4420" 00:10:53.460 }, 00:10:53.460 "peer_address": { 00:10:53.460 "trtype": "TCP", 00:10:53.460 "adrfam": "IPv4", 00:10:53.460 "traddr": "10.0.0.1", 00:10:53.460 "trsvcid": "43654" 00:10:53.460 }, 00:10:53.460 "auth": { 00:10:53.460 "state": "completed", 00:10:53.460 "digest": "sha384", 00:10:53.460 "dhgroup": "ffdhe4096" 00:10:53.460 } 00:10:53.460 } 00:10:53.460 ]' 00:10:53.460 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:53.719 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:53.719 09:33:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:53.719 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:10:53.719 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:53.719 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:53.719 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:53.719 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:53.978 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:53.978 09:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:54.913 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:54.913 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.172 09:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:10:55.739 00:10:55.739 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:55.739 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:55.739 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:55.997 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:55.998 { 00:10:55.998 "cntlid": 81, 00:10:55.998 "qid": 0, 00:10:55.998 "state": "enabled", 00:10:55.998 "thread": "nvmf_tgt_poll_group_000", 00:10:55.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:55.998 "listen_address": { 00:10:55.998 "trtype": "TCP", 00:10:55.998 "adrfam": "IPv4", 00:10:55.998 "traddr": "10.0.0.3", 00:10:55.998 "trsvcid": "4420" 00:10:55.998 }, 00:10:55.998 "peer_address": { 00:10:55.998 "trtype": "TCP", 00:10:55.998 "adrfam": "IPv4", 00:10:55.998 "traddr": "10.0.0.1", 00:10:55.998 "trsvcid": "55626" 00:10:55.998 }, 00:10:55.998 "auth": { 00:10:55.998 "state": "completed", 00:10:55.998 "digest": "sha384", 00:10:55.998 "dhgroup": "ffdhe6144" 00:10:55.998 } 00:10:55.998 } 00:10:55.998 ]' 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
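
Every authentication round in this trace repeats the same sequence of xtrace markers (target/auth.sh@65 through @78). For readability, here is a condensed sketch of one such round reconstructed from the commands visible above; it is illustrative only — the rpc_cmd/hostrpc wrappers, the keys/ckeys arrays, and the hostnqn variable belong to the test harness and are assumed here, not shown in this excerpt:

    # Sketch of one connect_authenticate round, as implied by the xtrace above.
    # Assumed: rpc_cmd talks to the nvmf target, hostrpc talks to the host app
    # at /var/tmp/host.sock, and the keyN/ckeyN key names were set up beforehand.
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # the controller key is optional: note the key3 rounds above pass no ckey3
        local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

        # target side: allow this host on the subsystem with its DH-HMAC-CHAP key(s)
        rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # host side: attach a controller, which forces the authentication handshake
        hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
            --dhchap-key "key$keyid" "${ckey[@]}"

        # verify the controller came up and the qpair negotiated what was requested
        [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        local qpairs
        qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
        [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$digest" ]]
        [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
        [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]

        hostrpc bdev_nvme_detach_controller nvme0
    }

After this in-process check, each round above additionally repeats the handshake with the kernel initiator (nvme connect ... --dhchap-secret DHHC-1:..., then nvme disconnect) and finally removes the host from the subsystem with nvmf_subsystem_remove_host before the next key is tried.
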
00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:55.998 09:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:56.256 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:56.256 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:57.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:57.192 09:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.451 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:10:57.709 00:10:57.709 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:57.709 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:57.709 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:10:57.968 { 00:10:57.968 "cntlid": 83, 00:10:57.968 "qid": 0, 00:10:57.968 "state": "enabled", 00:10:57.968 "thread": "nvmf_tgt_poll_group_000", 00:10:57.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:10:57.968 "listen_address": { 00:10:57.968 "trtype": "TCP", 00:10:57.968 "adrfam": "IPv4", 00:10:57.968 "traddr": "10.0.0.3", 00:10:57.968 "trsvcid": "4420" 00:10:57.968 }, 00:10:57.968 "peer_address": { 00:10:57.968 "trtype": "TCP", 00:10:57.968 "adrfam": "IPv4", 00:10:57.968 "traddr": "10.0.0.1", 00:10:57.968 "trsvcid": "55658" 00:10:57.968 }, 00:10:57.968 "auth": { 00:10:57.968 "state": "completed", 00:10:57.968 "digest": "sha384", 
00:10:57.968 "dhgroup": "ffdhe6144" 00:10:57.968 } 00:10:57.968 } 00:10:57.968 ]' 00:10:57.968 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:10:58.227 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:10:58.227 09:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:10:58.227 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:10:58.227 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:10:58.227 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:10:58.227 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:10:58.227 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:10:58.574 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:58.574 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:10:59.141 09:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:10:59.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:59.141 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.708 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:10:59.709 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.709 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.709 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.709 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:10:59.967 00:10:59.967 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:10:59.967 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:10:59.967 09:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:00.225 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:00.225 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:00.225 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.225 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.225 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.225 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:00.225 { 00:11:00.225 "cntlid": 85, 00:11:00.225 "qid": 0, 00:11:00.225 "state": "enabled", 00:11:00.225 "thread": "nvmf_tgt_poll_group_000", 00:11:00.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:00.225 "listen_address": { 00:11:00.225 "trtype": "TCP", 00:11:00.225 "adrfam": "IPv4", 00:11:00.225 "traddr": "10.0.0.3", 00:11:00.225 "trsvcid": "4420" 00:11:00.225 }, 00:11:00.225 "peer_address": { 00:11:00.225 "trtype": "TCP", 00:11:00.225 "adrfam": "IPv4", 00:11:00.225 "traddr": "10.0.0.1", 00:11:00.226 "trsvcid": "55682" 
00:11:00.226 }, 00:11:00.226 "auth": { 00:11:00.226 "state": "completed", 00:11:00.226 "digest": "sha384", 00:11:00.226 "dhgroup": "ffdhe6144" 00:11:00.226 } 00:11:00.226 } 00:11:00.226 ]' 00:11:00.226 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:00.484 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:00.742 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:00.743 09:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:01.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:01.310 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:01.877 09:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:02.136 00:11:02.136 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:02.136 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:02.136 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:02.394 { 00:11:02.394 "cntlid": 87, 00:11:02.394 "qid": 0, 00:11:02.394 "state": "enabled", 00:11:02.394 "thread": "nvmf_tgt_poll_group_000", 00:11:02.394 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:02.394 "listen_address": { 00:11:02.394 "trtype": "TCP", 00:11:02.394 "adrfam": "IPv4", 00:11:02.394 "traddr": "10.0.0.3", 00:11:02.394 "trsvcid": "4420" 00:11:02.394 }, 00:11:02.394 "peer_address": { 00:11:02.394 "trtype": "TCP", 00:11:02.394 "adrfam": "IPv4", 00:11:02.394 "traddr": "10.0.0.1", 00:11:02.394 "trsvcid": 
"55704" 00:11:02.394 }, 00:11:02.394 "auth": { 00:11:02.394 "state": "completed", 00:11:02.394 "digest": "sha384", 00:11:02.394 "dhgroup": "ffdhe6144" 00:11:02.394 } 00:11:02.394 } 00:11:02.394 ]' 00:11:02.394 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:02.654 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:02.913 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:02.913 09:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:03.855 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:03.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:03.856 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.122 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.123 09:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:04.690 00:11:04.690 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:04.690 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:04.690 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:04.949 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:04.949 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:04.949 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.949 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:04.949 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.949 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:04.949 { 00:11:04.949 "cntlid": 89, 00:11:04.949 "qid": 0, 00:11:04.949 "state": "enabled", 00:11:04.949 "thread": "nvmf_tgt_poll_group_000", 00:11:04.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:04.949 "listen_address": { 00:11:04.949 "trtype": "TCP", 00:11:04.949 "adrfam": "IPv4", 00:11:04.949 "traddr": "10.0.0.3", 00:11:04.949 "trsvcid": "4420" 00:11:04.949 }, 00:11:04.949 "peer_address": { 00:11:04.949 
"trtype": "TCP", 00:11:04.949 "adrfam": "IPv4", 00:11:04.949 "traddr": "10.0.0.1", 00:11:04.949 "trsvcid": "55738" 00:11:04.949 }, 00:11:04.949 "auth": { 00:11:04.949 "state": "completed", 00:11:04.949 "digest": "sha384", 00:11:04.949 "dhgroup": "ffdhe8192" 00:11:04.949 } 00:11:04.949 } 00:11:04.949 ]' 00:11:04.950 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:05.209 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:05.209 09:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:05.209 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:05.209 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:05.209 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:05.209 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:05.209 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:05.468 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:05.468 09:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:06.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:06.404 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:06.663 09:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:06.663 09:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:07.230 00:11:07.230 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:07.230 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:07.230 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:07.489 { 00:11:07.489 "cntlid": 91, 00:11:07.489 "qid": 0, 00:11:07.489 "state": "enabled", 00:11:07.489 "thread": "nvmf_tgt_poll_group_000", 00:11:07.489 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 
00:11:07.489 "listen_address": { 00:11:07.489 "trtype": "TCP", 00:11:07.489 "adrfam": "IPv4", 00:11:07.489 "traddr": "10.0.0.3", 00:11:07.489 "trsvcid": "4420" 00:11:07.489 }, 00:11:07.489 "peer_address": { 00:11:07.489 "trtype": "TCP", 00:11:07.489 "adrfam": "IPv4", 00:11:07.489 "traddr": "10.0.0.1", 00:11:07.489 "trsvcid": "54118" 00:11:07.489 }, 00:11:07.489 "auth": { 00:11:07.489 "state": "completed", 00:11:07.489 "digest": "sha384", 00:11:07.489 "dhgroup": "ffdhe8192" 00:11:07.489 } 00:11:07.489 } 00:11:07.489 ]' 00:11:07.489 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:07.490 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:07.490 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:07.749 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:07.749 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:07.749 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:07.749 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:07.749 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:08.008 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:08.008 09:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:08.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.944 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:09.203 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.203 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.203 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.203 09:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:09.769 00:11:09.769 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:09.769 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:09.769 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:10.028 { 00:11:10.028 "cntlid": 93, 00:11:10.028 "qid": 0, 00:11:10.028 "state": "enabled", 00:11:10.028 "thread": 
"nvmf_tgt_poll_group_000", 00:11:10.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:10.028 "listen_address": { 00:11:10.028 "trtype": "TCP", 00:11:10.028 "adrfam": "IPv4", 00:11:10.028 "traddr": "10.0.0.3", 00:11:10.028 "trsvcid": "4420" 00:11:10.028 }, 00:11:10.028 "peer_address": { 00:11:10.028 "trtype": "TCP", 00:11:10.028 "adrfam": "IPv4", 00:11:10.028 "traddr": "10.0.0.1", 00:11:10.028 "trsvcid": "54138" 00:11:10.028 }, 00:11:10.028 "auth": { 00:11:10.028 "state": "completed", 00:11:10.028 "digest": "sha384", 00:11:10.028 "dhgroup": "ffdhe8192" 00:11:10.028 } 00:11:10.028 } 00:11:10.028 ]' 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:10.028 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:10.287 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:10.287 09:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:10.287 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:10.287 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:10.287 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:10.546 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:10.546 09:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:11.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:11.501 09:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:11.501 09:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:12.079 00:11:12.338 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:12.338 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:12.338 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:12.597 { 00:11:12.597 "cntlid": 95, 00:11:12.597 "qid": 0, 00:11:12.597 "state": "enabled", 00:11:12.597 
"thread": "nvmf_tgt_poll_group_000", 00:11:12.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:12.597 "listen_address": { 00:11:12.597 "trtype": "TCP", 00:11:12.597 "adrfam": "IPv4", 00:11:12.597 "traddr": "10.0.0.3", 00:11:12.597 "trsvcid": "4420" 00:11:12.597 }, 00:11:12.597 "peer_address": { 00:11:12.597 "trtype": "TCP", 00:11:12.597 "adrfam": "IPv4", 00:11:12.597 "traddr": "10.0.0.1", 00:11:12.597 "trsvcid": "54176" 00:11:12.597 }, 00:11:12.597 "auth": { 00:11:12.597 "state": "completed", 00:11:12.597 "digest": "sha384", 00:11:12.597 "dhgroup": "ffdhe8192" 00:11:12.597 } 00:11:12.597 } 00:11:12.597 ]' 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:12.597 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:12.856 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:12.856 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:12.856 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:13.115 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:13.115 09:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:13.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:13.683 09:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:13.683 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:13.942 09:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:14.200 00:11:14.200 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:14.200 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:14.200 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:14.459 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:14.459 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:14.459 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.459 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:14.718 { 00:11:14.718 "cntlid": 97, 00:11:14.718 "qid": 0, 00:11:14.718 "state": "enabled", 00:11:14.718 "thread": "nvmf_tgt_poll_group_000", 00:11:14.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:14.718 "listen_address": { 00:11:14.718 "trtype": "TCP", 00:11:14.718 "adrfam": "IPv4", 00:11:14.718 "traddr": "10.0.0.3", 00:11:14.718 "trsvcid": "4420" 00:11:14.718 }, 00:11:14.718 "peer_address": { 00:11:14.718 "trtype": "TCP", 00:11:14.718 "adrfam": "IPv4", 00:11:14.718 "traddr": "10.0.0.1", 00:11:14.718 "trsvcid": "54196" 00:11:14.718 }, 00:11:14.718 "auth": { 00:11:14.718 "state": "completed", 00:11:14.718 "digest": "sha512", 00:11:14.718 "dhgroup": "null" 00:11:14.718 } 00:11:14.718 } 00:11:14.718 ]' 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:14.718 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:14.976 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:14.976 09:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:15.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:15.912 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:15.913 09:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:16.171 00:11:16.171 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:16.171 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:16.171 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.739 09:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:16.739 { 00:11:16.739 "cntlid": 99, 00:11:16.739 "qid": 0, 00:11:16.739 "state": "enabled", 00:11:16.739 "thread": "nvmf_tgt_poll_group_000", 00:11:16.739 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:16.739 "listen_address": { 00:11:16.739 "trtype": "TCP", 00:11:16.739 "adrfam": "IPv4", 00:11:16.739 "traddr": "10.0.0.3", 00:11:16.739 "trsvcid": "4420" 00:11:16.739 }, 00:11:16.739 "peer_address": { 00:11:16.739 "trtype": "TCP", 00:11:16.739 "adrfam": "IPv4", 00:11:16.739 "traddr": "10.0.0.1", 00:11:16.739 "trsvcid": "58844" 00:11:16.739 }, 00:11:16.739 "auth": { 00:11:16.739 "state": "completed", 00:11:16.739 "digest": "sha512", 00:11:16.739 "dhgroup": "null" 00:11:16.739 } 00:11:16.739 } 00:11:16.739 ]' 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:16.739 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.998 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:16.998 09:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:17.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.933 09:34:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:17.933 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.192 09:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:18.450 00:11:18.450 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:18.450 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:18.450 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:18.708 { 00:11:18.708 "cntlid": 101, 00:11:18.708 "qid": 0, 00:11:18.708 "state": "enabled", 00:11:18.708 "thread": "nvmf_tgt_poll_group_000", 00:11:18.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:18.708 "listen_address": { 00:11:18.708 "trtype": "TCP", 00:11:18.708 "adrfam": "IPv4", 00:11:18.708 "traddr": "10.0.0.3", 00:11:18.708 "trsvcid": "4420" 00:11:18.708 }, 00:11:18.708 "peer_address": { 00:11:18.708 "trtype": "TCP", 00:11:18.708 "adrfam": "IPv4", 00:11:18.708 "traddr": "10.0.0.1", 00:11:18.708 "trsvcid": "58872" 00:11:18.708 }, 00:11:18.708 "auth": { 00:11:18.708 "state": "completed", 00:11:18.708 "digest": "sha512", 00:11:18.708 "dhgroup": "null" 00:11:18.708 } 00:11:18.708 } 00:11:18.708 ]' 00:11:18.708 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:18.966 09:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:19.224 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:19.224 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:20.158 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:20.158 09:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.417 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:20.675 00:11:20.675 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:20.675 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:20.675 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.241 { 00:11:21.241 "cntlid": 103, 00:11:21.241 "qid": 0, 00:11:21.241 "state": "enabled", 00:11:21.241 "thread": "nvmf_tgt_poll_group_000", 00:11:21.241 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:21.241 "listen_address": { 00:11:21.241 "trtype": "TCP", 00:11:21.241 "adrfam": "IPv4", 00:11:21.241 "traddr": "10.0.0.3", 00:11:21.241 "trsvcid": "4420" 00:11:21.241 }, 00:11:21.241 "peer_address": { 00:11:21.241 "trtype": "TCP", 00:11:21.241 "adrfam": "IPv4", 00:11:21.241 "traddr": "10.0.0.1", 00:11:21.241 "trsvcid": "58890" 00:11:21.241 }, 00:11:21.241 "auth": { 00:11:21.241 "state": "completed", 00:11:21.241 "digest": "sha512", 00:11:21.241 "dhgroup": "null" 00:11:21.241 } 00:11:21.241 } 00:11:21.241 ]' 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:21.241 09:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.241 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:21.241 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.241 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.241 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.241 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.499 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:21.499 09:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.434 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:22.434 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.692 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:22.951 00:11:23.209 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.209 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.209 09:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.469 
09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.469 { 00:11:23.469 "cntlid": 105, 00:11:23.469 "qid": 0, 00:11:23.469 "state": "enabled", 00:11:23.469 "thread": "nvmf_tgt_poll_group_000", 00:11:23.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:23.469 "listen_address": { 00:11:23.469 "trtype": "TCP", 00:11:23.469 "adrfam": "IPv4", 00:11:23.469 "traddr": "10.0.0.3", 00:11:23.469 "trsvcid": "4420" 00:11:23.469 }, 00:11:23.469 "peer_address": { 00:11:23.469 "trtype": "TCP", 00:11:23.469 "adrfam": "IPv4", 00:11:23.469 "traddr": "10.0.0.1", 00:11:23.469 "trsvcid": "58926" 00:11:23.469 }, 00:11:23.469 "auth": { 00:11:23.469 "state": "completed", 00:11:23.469 "digest": "sha512", 00:11:23.469 "dhgroup": "ffdhe2048" 00:11:23.469 } 00:11:23.469 } 00:11:23.469 ]' 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.469 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.727 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:23.727 09:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.660 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:24.660 09:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:24.660 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:24.919 09:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:25.177 00:11:25.177 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.177 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.177 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.435 { 00:11:25.435 "cntlid": 107, 00:11:25.435 "qid": 0, 00:11:25.435 "state": "enabled", 00:11:25.435 "thread": "nvmf_tgt_poll_group_000", 00:11:25.435 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:25.435 "listen_address": { 00:11:25.435 "trtype": "TCP", 00:11:25.435 "adrfam": "IPv4", 00:11:25.435 "traddr": "10.0.0.3", 00:11:25.435 "trsvcid": "4420" 00:11:25.435 }, 00:11:25.435 "peer_address": { 00:11:25.435 "trtype": "TCP", 00:11:25.435 "adrfam": "IPv4", 00:11:25.435 "traddr": "10.0.0.1", 00:11:25.435 "trsvcid": "58944" 00:11:25.435 }, 00:11:25.435 "auth": { 00:11:25.435 "state": "completed", 00:11:25.435 "digest": "sha512", 00:11:25.435 "dhgroup": "ffdhe2048" 00:11:25.435 } 00:11:25.435 } 00:11:25.435 ]' 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.435 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:25.694 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.694 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:25.694 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.694 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.694 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.694 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:25.952 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:25.952 09:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.886 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:26.886 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.145 09:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:27.403 00:11:27.403 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.403 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.403 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.662 { 00:11:27.662 "cntlid": 109, 00:11:27.662 "qid": 0, 00:11:27.662 "state": "enabled", 00:11:27.662 "thread": "nvmf_tgt_poll_group_000", 00:11:27.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:27.662 "listen_address": { 00:11:27.662 "trtype": "TCP", 00:11:27.662 "adrfam": "IPv4", 00:11:27.662 "traddr": "10.0.0.3", 00:11:27.662 "trsvcid": "4420" 00:11:27.662 }, 00:11:27.662 "peer_address": { 00:11:27.662 "trtype": "TCP", 00:11:27.662 "adrfam": "IPv4", 00:11:27.662 "traddr": "10.0.0.1", 00:11:27.662 "trsvcid": "37144" 00:11:27.662 }, 00:11:27.662 "auth": { 00:11:27.662 "state": "completed", 00:11:27.662 "digest": "sha512", 00:11:27.662 "dhgroup": "ffdhe2048" 00:11:27.662 } 00:11:27.662 } 00:11:27.662 ]' 00:11:27.662 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.922 09:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.181 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:28.181 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:29.117 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:29.117 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:29.117 09:34:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:29.117 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.118 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.118 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.118 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:29.118 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:29.118 09:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:29.387 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:29.659 00:11:29.659 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.659 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.659 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.917 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.917 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.917 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.917 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.918 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.918 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.918 { 00:11:29.918 "cntlid": 111, 00:11:29.918 "qid": 0, 00:11:29.918 "state": "enabled", 00:11:29.918 "thread": "nvmf_tgt_poll_group_000", 00:11:29.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:29.918 "listen_address": { 00:11:29.918 "trtype": "TCP", 00:11:29.918 "adrfam": "IPv4", 00:11:29.918 "traddr": "10.0.0.3", 00:11:29.918 "trsvcid": "4420" 00:11:29.918 }, 00:11:29.918 "peer_address": { 00:11:29.918 "trtype": "TCP", 00:11:29.918 "adrfam": "IPv4", 00:11:29.918 "traddr": "10.0.0.1", 00:11:29.918 "trsvcid": "37182" 00:11:29.918 }, 00:11:29.918 "auth": { 00:11:29.918 "state": "completed", 00:11:29.918 "digest": "sha512", 00:11:29.918 "dhgroup": "ffdhe2048" 00:11:29.918 } 00:11:29.918 } 00:11:29.918 ]' 00:11:29.918 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:30.176 09:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:30.434 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:30.434 09:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:31.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:31.368 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:31.626 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:32.192 00:11:32.192 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:32.192 09:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:32.192 09:34:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:32.450 { 00:11:32.450 "cntlid": 113, 00:11:32.450 "qid": 0, 00:11:32.450 "state": "enabled", 00:11:32.450 "thread": "nvmf_tgt_poll_group_000", 00:11:32.450 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:32.450 "listen_address": { 00:11:32.450 "trtype": "TCP", 00:11:32.450 "adrfam": "IPv4", 00:11:32.450 "traddr": "10.0.0.3", 00:11:32.450 "trsvcid": "4420" 00:11:32.450 }, 00:11:32.450 "peer_address": { 00:11:32.450 "trtype": "TCP", 00:11:32.450 "adrfam": "IPv4", 00:11:32.450 "traddr": "10.0.0.1", 00:11:32.450 "trsvcid": "37192" 00:11:32.450 }, 00:11:32.450 "auth": { 00:11:32.450 "state": "completed", 00:11:32.450 "digest": "sha512", 00:11:32.450 "dhgroup": "ffdhe3072" 00:11:32.450 } 00:11:32.450 } 00:11:32.450 ]' 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:32.450 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:32.451 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:32.451 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:32.451 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:32.451 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:32.708 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:32.708 09:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 
00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:33.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:33.641 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:33.898 09:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:34.155 00:11:34.155 09:34:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:34.155 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:34.155 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:34.413 { 00:11:34.413 "cntlid": 115, 00:11:34.413 "qid": 0, 00:11:34.413 "state": "enabled", 00:11:34.413 "thread": "nvmf_tgt_poll_group_000", 00:11:34.413 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:34.413 "listen_address": { 00:11:34.413 "trtype": "TCP", 00:11:34.413 "adrfam": "IPv4", 00:11:34.413 "traddr": "10.0.0.3", 00:11:34.413 "trsvcid": "4420" 00:11:34.413 }, 00:11:34.413 "peer_address": { 00:11:34.413 "trtype": "TCP", 00:11:34.413 "adrfam": "IPv4", 00:11:34.413 "traddr": "10.0.0.1", 00:11:34.413 "trsvcid": "37214" 00:11:34.413 }, 00:11:34.413 "auth": { 00:11:34.413 "state": "completed", 00:11:34.413 "digest": "sha512", 00:11:34.413 "dhgroup": "ffdhe3072" 00:11:34.413 } 00:11:34.413 } 00:11:34.413 ]' 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:34.413 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:34.671 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:34.671 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:34.671 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:34.671 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:34.671 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:34.929 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:34.929 09:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret 
DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:35.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:35.862 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.120 09:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:36.685 00:11:36.685 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:36.685 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:36.685 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:36.943 { 00:11:36.943 "cntlid": 117, 00:11:36.943 "qid": 0, 00:11:36.943 "state": "enabled", 00:11:36.943 "thread": "nvmf_tgt_poll_group_000", 00:11:36.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:36.943 "listen_address": { 00:11:36.943 "trtype": "TCP", 00:11:36.943 "adrfam": "IPv4", 00:11:36.943 "traddr": "10.0.0.3", 00:11:36.943 "trsvcid": "4420" 00:11:36.943 }, 00:11:36.943 "peer_address": { 00:11:36.943 "trtype": "TCP", 00:11:36.943 "adrfam": "IPv4", 00:11:36.943 "traddr": "10.0.0.1", 00:11:36.943 "trsvcid": "42956" 00:11:36.943 }, 00:11:36.943 "auth": { 00:11:36.943 "state": "completed", 00:11:36.943 "digest": "sha512", 00:11:36.943 "dhgroup": "ffdhe3072" 00:11:36.943 } 00:11:36.943 } 00:11:36.943 ]' 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:36.943 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.201 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:37.201 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.201 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.201 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.201 09:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.458 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:37.458 09:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.391 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:38.391 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.649 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:38.906 00:11:38.906 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:38.906 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:38.906 09:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.164 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.164 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.164 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.164 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.421 { 00:11:39.421 "cntlid": 119, 00:11:39.421 "qid": 0, 00:11:39.421 "state": "enabled", 00:11:39.421 "thread": "nvmf_tgt_poll_group_000", 00:11:39.421 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:39.421 "listen_address": { 00:11:39.421 "trtype": "TCP", 00:11:39.421 "adrfam": "IPv4", 00:11:39.421 "traddr": "10.0.0.3", 00:11:39.421 "trsvcid": "4420" 00:11:39.421 }, 00:11:39.421 "peer_address": { 00:11:39.421 "trtype": "TCP", 00:11:39.421 "adrfam": "IPv4", 00:11:39.421 "traddr": "10.0.0.1", 00:11:39.421 "trsvcid": "42984" 00:11:39.421 }, 00:11:39.421 "auth": { 00:11:39.421 "state": "completed", 00:11:39.421 "digest": "sha512", 00:11:39.421 "dhgroup": "ffdhe3072" 00:11:39.421 } 00:11:39.421 } 00:11:39.421 ]' 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.421 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:39.678 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:39.679 09:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:40.611 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:40.869 09:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:41.127 00:11:41.127 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.127 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.127 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.692 { 00:11:41.692 "cntlid": 121, 00:11:41.692 "qid": 0, 00:11:41.692 "state": "enabled", 00:11:41.692 "thread": "nvmf_tgt_poll_group_000", 00:11:41.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:41.692 "listen_address": { 00:11:41.692 "trtype": "TCP", 00:11:41.692 "adrfam": "IPv4", 00:11:41.692 "traddr": "10.0.0.3", 00:11:41.692 "trsvcid": "4420" 00:11:41.692 }, 00:11:41.692 "peer_address": { 00:11:41.692 "trtype": "TCP", 00:11:41.692 "adrfam": "IPv4", 00:11:41.692 "traddr": "10.0.0.1", 00:11:41.692 "trsvcid": "43020" 00:11:41.692 }, 00:11:41.692 "auth": { 00:11:41.692 "state": "completed", 00:11:41.692 "digest": "sha512", 00:11:41.692 "dhgroup": "ffdhe4096" 00:11:41.692 } 00:11:41.692 } 00:11:41.692 ]' 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.692 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.950 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret 
DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:41.950 09:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:42.602 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:42.861 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.120 09:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:43.378 00:11:43.378 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.378 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.378 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.637 { 00:11:43.637 "cntlid": 123, 00:11:43.637 "qid": 0, 00:11:43.637 "state": "enabled", 00:11:43.637 "thread": "nvmf_tgt_poll_group_000", 00:11:43.637 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:43.637 "listen_address": { 00:11:43.637 "trtype": "TCP", 00:11:43.637 "adrfam": "IPv4", 00:11:43.637 "traddr": "10.0.0.3", 00:11:43.637 "trsvcid": "4420" 00:11:43.637 }, 00:11:43.637 "peer_address": { 00:11:43.637 "trtype": "TCP", 00:11:43.637 "adrfam": "IPv4", 00:11:43.637 "traddr": "10.0.0.1", 00:11:43.637 "trsvcid": "43064" 00:11:43.637 }, 00:11:43.637 "auth": { 00:11:43.637 "state": "completed", 00:11:43.637 "digest": "sha512", 00:11:43.637 "dhgroup": "ffdhe4096" 00:11:43.637 } 00:11:43.637 } 00:11:43.637 ]' 00:11:43.637 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.913 09:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.172 09:34:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:44.172 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:45.109 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:45.109 09:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.109 09:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.109 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:45.675 00:11:45.676 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.676 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.676 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.934 { 00:11:45.934 "cntlid": 125, 00:11:45.934 "qid": 0, 00:11:45.934 "state": "enabled", 00:11:45.934 "thread": "nvmf_tgt_poll_group_000", 00:11:45.934 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:45.934 "listen_address": { 00:11:45.934 "trtype": "TCP", 00:11:45.934 "adrfam": "IPv4", 00:11:45.934 "traddr": "10.0.0.3", 00:11:45.934 "trsvcid": "4420" 00:11:45.934 }, 00:11:45.934 "peer_address": { 00:11:45.934 "trtype": "TCP", 00:11:45.934 "adrfam": "IPv4", 00:11:45.934 "traddr": "10.0.0.1", 00:11:45.934 "trsvcid": "40760" 00:11:45.934 }, 00:11:45.934 "auth": { 00:11:45.934 "state": "completed", 00:11:45.934 "digest": "sha512", 00:11:45.934 "dhgroup": "ffdhe4096" 00:11:45.934 } 00:11:45.934 } 00:11:45.934 ]' 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:45.934 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:46.193 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:46.193 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:46.193 09:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:46.451 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:46.452 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:47.019 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:47.019 09:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.587 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:47.846 00:11:47.846 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.846 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.846 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:48.104 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:48.104 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:48.104 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.104 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.104 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.104 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:48.104 { 00:11:48.104 "cntlid": 127, 00:11:48.104 "qid": 0, 00:11:48.104 "state": "enabled", 00:11:48.104 "thread": "nvmf_tgt_poll_group_000", 00:11:48.104 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:48.104 "listen_address": { 00:11:48.104 "trtype": "TCP", 00:11:48.104 "adrfam": "IPv4", 00:11:48.104 "traddr": "10.0.0.3", 00:11:48.104 "trsvcid": "4420" 00:11:48.104 }, 00:11:48.104 "peer_address": { 00:11:48.104 "trtype": "TCP", 00:11:48.104 "adrfam": "IPv4", 00:11:48.104 "traddr": "10.0.0.1", 00:11:48.104 "trsvcid": "40792" 00:11:48.104 }, 00:11:48.104 "auth": { 00:11:48.104 "state": "completed", 00:11:48.104 "digest": "sha512", 00:11:48.104 "dhgroup": "ffdhe4096" 00:11:48.104 } 00:11:48.105 } 00:11:48.105 ]' 00:11:48.105 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:48.105 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:48.105 09:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:48.105 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:48.105 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:48.364 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:48.364 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:48.364 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.622 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:48.622 09:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:49.189 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:49.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:49.189 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:49.189 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.189 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.189 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.190 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:49.190 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:49.190 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:49.190 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.449 09:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:49.449 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:50.016 00:11:50.016 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:50.016 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:50.016 09:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:50.584 { 00:11:50.584 "cntlid": 129, 00:11:50.584 "qid": 0, 00:11:50.584 "state": "enabled", 00:11:50.584 "thread": "nvmf_tgt_poll_group_000", 00:11:50.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:50.584 "listen_address": { 00:11:50.584 "trtype": "TCP", 00:11:50.584 "adrfam": "IPv4", 00:11:50.584 "traddr": "10.0.0.3", 00:11:50.584 "trsvcid": "4420" 00:11:50.584 }, 00:11:50.584 "peer_address": { 00:11:50.584 "trtype": "TCP", 00:11:50.584 "adrfam": "IPv4", 00:11:50.584 "traddr": "10.0.0.1", 00:11:50.584 "trsvcid": "40818" 00:11:50.584 }, 00:11:50.584 "auth": { 00:11:50.584 "state": "completed", 00:11:50.584 "digest": "sha512", 00:11:50.584 "dhgroup": "ffdhe6144" 00:11:50.584 } 00:11:50.584 } 00:11:50.584 ]' 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:50.584 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.842 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:50.842 09:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:11:51.777 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:51.778 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:52.036 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:11:52.036 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:52.036 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:52.036 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:52.036 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:52.036 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:52.037 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.037 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.037 09:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.037 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.037 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.037 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.037 09:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:52.306 00:11:52.306 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:52.306 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:52.306 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:52.874 { 00:11:52.874 "cntlid": 131, 00:11:52.874 "qid": 0, 00:11:52.874 "state": "enabled", 00:11:52.874 "thread": "nvmf_tgt_poll_group_000", 00:11:52.874 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:52.874 "listen_address": { 00:11:52.874 "trtype": "TCP", 00:11:52.874 "adrfam": "IPv4", 00:11:52.874 "traddr": "10.0.0.3", 00:11:52.874 "trsvcid": "4420" 00:11:52.874 }, 00:11:52.874 "peer_address": { 00:11:52.874 "trtype": "TCP", 00:11:52.874 "adrfam": "IPv4", 00:11:52.874 "traddr": "10.0.0.1", 00:11:52.874 "trsvcid": "40850" 00:11:52.874 }, 00:11:52.874 "auth": { 00:11:52.874 "state": "completed", 00:11:52.874 "digest": "sha512", 00:11:52.874 "dhgroup": "ffdhe6144" 00:11:52.874 } 00:11:52.874 } 00:11:52.874 ]' 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.874 09:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:53.133 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:53.133 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:54.070 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:54.070 09:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.328 09:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.328 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:54.895 00:11:54.895 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:54.895 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:54.895 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:55.153 { 00:11:55.153 "cntlid": 133, 00:11:55.153 "qid": 0, 00:11:55.153 "state": "enabled", 00:11:55.153 "thread": "nvmf_tgt_poll_group_000", 00:11:55.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:55.153 "listen_address": { 00:11:55.153 "trtype": "TCP", 00:11:55.153 "adrfam": "IPv4", 00:11:55.153 "traddr": "10.0.0.3", 00:11:55.153 "trsvcid": "4420" 00:11:55.153 }, 00:11:55.153 "peer_address": { 00:11:55.153 "trtype": "TCP", 00:11:55.153 "adrfam": "IPv4", 00:11:55.153 "traddr": "10.0.0.1", 00:11:55.153 "trsvcid": "40882" 00:11:55.153 }, 00:11:55.153 "auth": { 00:11:55.153 "state": "completed", 00:11:55.153 "digest": "sha512", 00:11:55.153 "dhgroup": "ffdhe6144" 00:11:55.153 } 00:11:55.153 } 00:11:55.153 ]' 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:55.153 09:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:55.153 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:11:55.153 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:55.153 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:55.153 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:55.153 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:55.720 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:55.720 09:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:56.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:56.286 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:56.545 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.112 00:11:57.112 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.112 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.112 09:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:57.370 { 00:11:57.370 "cntlid": 135, 00:11:57.370 "qid": 0, 00:11:57.370 "state": "enabled", 00:11:57.370 "thread": "nvmf_tgt_poll_group_000", 00:11:57.370 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:57.370 "listen_address": { 00:11:57.370 "trtype": "TCP", 00:11:57.370 "adrfam": "IPv4", 00:11:57.370 "traddr": "10.0.0.3", 00:11:57.370 "trsvcid": "4420" 00:11:57.370 }, 00:11:57.370 "peer_address": { 00:11:57.370 "trtype": "TCP", 00:11:57.370 "adrfam": "IPv4", 00:11:57.370 "traddr": "10.0.0.1", 00:11:57.370 "trsvcid": "42926" 00:11:57.370 }, 00:11:57.370 "auth": { 00:11:57.370 "state": "completed", 00:11:57.370 "digest": "sha512", 00:11:57.370 "dhgroup": "ffdhe6144" 00:11:57.370 } 00:11:57.370 } 00:11:57.370 ]' 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:57.370 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:57.937 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:57.937 09:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:58.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:58.504 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:58.763 09:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.329 00:11:59.588 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.588 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:59.588 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:59.847 { 00:11:59.847 "cntlid": 137, 00:11:59.847 "qid": 0, 00:11:59.847 "state": "enabled", 00:11:59.847 "thread": "nvmf_tgt_poll_group_000", 00:11:59.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:11:59.847 "listen_address": { 00:11:59.847 "trtype": "TCP", 00:11:59.847 "adrfam": "IPv4", 00:11:59.847 "traddr": "10.0.0.3", 00:11:59.847 "trsvcid": "4420" 00:11:59.847 }, 00:11:59.847 "peer_address": { 00:11:59.847 "trtype": "TCP", 00:11:59.847 "adrfam": "IPv4", 00:11:59.847 "traddr": "10.0.0.1", 00:11:59.847 "trsvcid": "42938" 00:11:59.847 }, 00:11:59.847 "auth": { 00:11:59.847 "state": "completed", 00:11:59.847 "digest": "sha512", 00:11:59.847 "dhgroup": "ffdhe8192" 00:11:59.847 } 00:11:59.847 } 00:11:59.847 ]' 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:11:59.847 09:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:59.847 09:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.107 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:12:00.107 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:01.042 09:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:01.301 09:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.301 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.867 00:12:01.867 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:01.867 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:01.867 09:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.126 { 00:12:02.126 "cntlid": 139, 00:12:02.126 "qid": 0, 00:12:02.126 "state": "enabled", 00:12:02.126 "thread": "nvmf_tgt_poll_group_000", 00:12:02.126 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:02.126 "listen_address": { 00:12:02.126 "trtype": "TCP", 00:12:02.126 "adrfam": "IPv4", 00:12:02.126 "traddr": "10.0.0.3", 00:12:02.126 "trsvcid": "4420" 00:12:02.126 }, 00:12:02.126 "peer_address": { 00:12:02.126 "trtype": "TCP", 00:12:02.126 "adrfam": "IPv4", 00:12:02.126 "traddr": "10.0.0.1", 00:12:02.126 "trsvcid": "42962" 00:12:02.126 }, 00:12:02.126 "auth": { 00:12:02.126 "state": "completed", 00:12:02.126 "digest": "sha512", 00:12:02.126 "dhgroup": "ffdhe8192" 00:12:02.126 } 00:12:02.126 } 00:12:02.126 ]' 00:12:02.126 09:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:02.126 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.386 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.386 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.386 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.386 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.386 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.645 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:12:02.645 09:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: --dhchap-ctrl-secret DHHC-1:02:ZjFmMzQyYTE4ZTViMTIyZGVhYmI1YmIzYzEyNDhkODg5Yzc1ZDEyZTg0ZGU5MDY06t4qeQ==: 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:03.214 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:03.782 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:12:03.782 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.783 09:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.350 00:12:04.350 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.350 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.350 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:04.609 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:04.610 { 00:12:04.610 "cntlid": 141, 00:12:04.610 "qid": 0, 00:12:04.610 "state": "enabled", 00:12:04.610 "thread": "nvmf_tgt_poll_group_000", 00:12:04.610 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:04.610 "listen_address": { 00:12:04.610 "trtype": "TCP", 00:12:04.610 "adrfam": "IPv4", 00:12:04.610 "traddr": "10.0.0.3", 00:12:04.610 "trsvcid": "4420" 00:12:04.610 }, 00:12:04.610 "peer_address": { 00:12:04.610 "trtype": "TCP", 00:12:04.610 "adrfam": "IPv4", 00:12:04.610 "traddr": "10.0.0.1", 00:12:04.610 "trsvcid": "42996" 00:12:04.610 }, 00:12:04.610 "auth": { 00:12:04.610 "state": "completed", 00:12:04.610 "digest": 
"sha512", 00:12:04.610 "dhgroup": "ffdhe8192" 00:12:04.610 } 00:12:04.610 } 00:12:04.610 ]' 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:04.610 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.177 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:12:05.177 09:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:01:MDEyNGQwOTBmMzBmYTRiYTMyM2VmNTJkZjM4MDZkOWI0yQ+k: 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:05.746 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:05.746 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.005 09:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.574 00:12:06.574 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:06.574 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:06.574 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.143 { 00:12:07.143 "cntlid": 143, 00:12:07.143 "qid": 0, 00:12:07.143 "state": "enabled", 00:12:07.143 "thread": "nvmf_tgt_poll_group_000", 00:12:07.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:07.143 "listen_address": { 00:12:07.143 "trtype": "TCP", 00:12:07.143 "adrfam": "IPv4", 00:12:07.143 "traddr": "10.0.0.3", 00:12:07.143 "trsvcid": "4420" 00:12:07.143 }, 00:12:07.143 "peer_address": { 00:12:07.143 "trtype": "TCP", 00:12:07.143 "adrfam": "IPv4", 00:12:07.143 "traddr": "10.0.0.1", 00:12:07.143 "trsvcid": "49022" 00:12:07.143 }, 00:12:07.143 "auth": { 00:12:07.143 "state": "completed", 00:12:07.143 
"digest": "sha512", 00:12:07.143 "dhgroup": "ffdhe8192" 00:12:07.143 } 00:12:07.143 } 00:12:07.143 ]' 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.143 09:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.143 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.143 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.143 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.402 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:12:07.402 09:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.339 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:08.339 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.599 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:09.166 00:12:09.166 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:09.166 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:09.166 09:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.425 { 00:12:09.425 "cntlid": 145, 00:12:09.425 "qid": 0, 00:12:09.425 "state": "enabled", 00:12:09.425 "thread": "nvmf_tgt_poll_group_000", 00:12:09.425 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:09.425 "listen_address": { 00:12:09.425 "trtype": "TCP", 00:12:09.425 "adrfam": "IPv4", 00:12:09.425 "traddr": "10.0.0.3", 00:12:09.425 "trsvcid": "4420" 00:12:09.425 }, 00:12:09.425 "peer_address": { 00:12:09.425 "trtype": "TCP", 00:12:09.425 "adrfam": "IPv4", 00:12:09.425 "traddr": "10.0.0.1", 00:12:09.425 "trsvcid": "49040" 00:12:09.425 }, 00:12:09.425 "auth": { 00:12:09.425 "state": "completed", 00:12:09.425 "digest": "sha512", 00:12:09.425 "dhgroup": "ffdhe8192" 00:12:09.425 } 00:12:09.425 } 00:12:09.425 ]' 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:09.425 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.684 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:09.684 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.684 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.684 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.684 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.942 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:12:09.942 09:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:00:YTQ2NGQzMzY3ODczMTBkYjMyZWVkZWRiOWQ2ZGQ5MTc4ZjRiODE5YTVkMjIxOTJmx5yGVw==: --dhchap-ctrl-secret DHHC-1:03:ZjYyNjdhY2QyMzM3ZGEyMzFlOTJiNTI3N2FmNWU1MDhlN2ZmZjc1Nzg2ZjJiYTFjYzc2MmI3OTc5ZjU5MDNmMZ3MHVU=: 00:12:10.510 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.510 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:10.510 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.510 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 00:12:10.769 09:34:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:12:10.769 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:10.770 09:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:12:11.337 request: 00:12:11.337 { 00:12:11.337 "name": "nvme0", 00:12:11.337 "trtype": "tcp", 00:12:11.337 "traddr": "10.0.0.3", 00:12:11.337 "adrfam": "ipv4", 00:12:11.337 "trsvcid": "4420", 00:12:11.337 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:11.337 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:11.337 "prchk_reftag": false, 00:12:11.337 "prchk_guard": false, 00:12:11.337 "hdgst": false, 00:12:11.337 "ddgst": false, 00:12:11.337 "dhchap_key": "key2", 00:12:11.337 "allow_unrecognized_csi": false, 00:12:11.337 "method": "bdev_nvme_attach_controller", 00:12:11.337 "req_id": 1 00:12:11.337 } 00:12:11.337 Got JSON-RPC error response 00:12:11.337 response: 00:12:11.337 { 00:12:11.337 "code": -5, 00:12:11.337 "message": "Input/output error" 00:12:11.337 } 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:11.337 
09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:11.337 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:12:11.917 request: 00:12:11.917 { 00:12:11.917 "name": "nvme0", 00:12:11.917 "trtype": "tcp", 00:12:11.917 "traddr": "10.0.0.3", 00:12:11.917 "adrfam": "ipv4", 00:12:11.917 "trsvcid": "4420", 00:12:11.917 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:11.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:11.918 "prchk_reftag": false, 00:12:11.918 "prchk_guard": false, 00:12:11.918 "hdgst": false, 00:12:11.918 "ddgst": false, 00:12:11.918 "dhchap_key": "key1", 00:12:11.918 "dhchap_ctrlr_key": "ckey2", 00:12:11.918 "allow_unrecognized_csi": false, 00:12:11.918 "method": "bdev_nvme_attach_controller", 00:12:11.918 "req_id": 1 00:12:11.918 } 00:12:11.918 Got JSON-RPC error response 00:12:11.918 response: 00:12:11.918 { 
00:12:11.918 "code": -5, 00:12:11.918 "message": "Input/output error" 00:12:11.918 } 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.918 09:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:12.514 
request: 00:12:12.514 { 00:12:12.514 "name": "nvme0", 00:12:12.514 "trtype": "tcp", 00:12:12.514 "traddr": "10.0.0.3", 00:12:12.514 "adrfam": "ipv4", 00:12:12.514 "trsvcid": "4420", 00:12:12.514 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:12.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:12.514 "prchk_reftag": false, 00:12:12.514 "prchk_guard": false, 00:12:12.514 "hdgst": false, 00:12:12.514 "ddgst": false, 00:12:12.514 "dhchap_key": "key1", 00:12:12.514 "dhchap_ctrlr_key": "ckey1", 00:12:12.514 "allow_unrecognized_csi": false, 00:12:12.514 "method": "bdev_nvme_attach_controller", 00:12:12.514 "req_id": 1 00:12:12.514 } 00:12:12.514 Got JSON-RPC error response 00:12:12.514 response: 00:12:12.514 { 00:12:12.514 "code": -5, 00:12:12.514 "message": "Input/output error" 00:12:12.514 } 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 66936 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 66936 ']' 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 66936 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66936 00:12:12.514 killing process with pid 66936 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66936' 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 66936 00:12:12.514 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 66936 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:12.773 09:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=70089 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 70089 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70089 ']' 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:12.773 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 70089 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 70089 ']' 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
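Note: the restart above brings the nvmf target back up with --wait-for-rpc -L nvmf_auth so that the DH-HMAC-CHAP exchanges are logged, and the trace that follows stages the generated secrets in the target's keyring before the authentication cases rerun. Stripped of the xtrace noise, that staging step is a plain series of keyring_file_add_key RPCs; a minimal sketch using the key names and files from this run (rpc.py shown relative to the SPDK repo root, where the log uses the absolute /home/vagrant/spdk_repo/spdk path):

  scripts/rpc.py keyring_file_add_key key0 /tmp/spdk.key-null.Xy3      # host key for case 0
  scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7cO   # matching controller key
  scripts/rpc.py keyring_file_add_key key1 /tmp/spdk.key-sha256.7iv
  scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Cl
  scripts/rpc.py keyring_file_add_key key2 /tmp/spdk.key-sha384.HoR
  scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lnn
  scripts/rpc.py keyring_file_add_key key3 /tmp/spdk.key-sha512.cOp    # key3 has no ckey3 in this run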
00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable
00:12:13.032 09:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 null0
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xy3
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.7cO ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7cO
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.7iv
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.2Cl ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2Cl
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.HoR
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.lnn ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.lnn
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}"
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.cOp
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
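Note: each connect_authenticate round, like the sha512/ffdhe8192/key3 case being set up here, has the same shape: register the host NQN on the subsystem with a --dhchap-key, attach a controller through the host-side RPC socket (the rpc.py expansion of the hostrpc wrapper follows below), then read the qpair back on the target and assert that authentication completed with the expected digest and dhgroup. A condensed sketch of one round, with the addresses and NQNs used in this run:

  # Target side: permit the host to authenticate against cnode0 with key3.
  scripts/rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
      nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3
  # Host side: attach a controller, negotiating DH-HMAC-CHAP with the same key.
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
      -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 \
      -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
  # Target side: the qpair should report auth state "completed".
  scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].auth.state'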
00:12:13.292 09:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:12:14.669 nvme0n1
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:12:14.669 {
00:12:14.669 "cntlid": 1,
00:12:14.669 "qid": 0,
00:12:14.669 "state": "enabled",
00:12:14.669 "thread": "nvmf_tgt_poll_group_000",
00:12:14.669 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8",
00:12:14.669 "listen_address": {
00:12:14.669 "trtype": "TCP",
00:12:14.669 "adrfam": "IPv4",
00:12:14.669 "traddr": "10.0.0.3",
00:12:14.669 "trsvcid": "4420"
00:12:14.669 },
00:12:14.669 "peer_address": {
00:12:14.669 "trtype": "TCP",
00:12:14.669 "adrfam": "IPv4",
00:12:14.669 "traddr": "10.0.0.1",
00:12:14.669 "trsvcid": "49110"
00:12:14.669 },
00:12:14.669 "auth": {
00:12:14.669 "state": "completed",
00:12:14.669 "digest": "sha512",
00:12:14.669 "dhgroup": "ffdhe8192"
00:12:14.669 }
00:12:14.669 }
00:12:14.669 ]'
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:12:14.669 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:12:14.928 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:12:14.928 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:12:14.928 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:12:14.928 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:12:14.928 09:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:12:15.187 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret
DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:12:15.187 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.123 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key3 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:12:16.123 09:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.382 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.641 request: 00:12:16.641 { 00:12:16.641 "name": "nvme0", 00:12:16.641 "trtype": "tcp", 00:12:16.641 "traddr": "10.0.0.3", 00:12:16.641 "adrfam": "ipv4", 00:12:16.641 "trsvcid": "4420", 00:12:16.641 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:16.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:16.641 "prchk_reftag": false, 00:12:16.641 "prchk_guard": false, 00:12:16.641 "hdgst": false, 00:12:16.641 "ddgst": false, 00:12:16.641 "dhchap_key": "key3", 00:12:16.641 "allow_unrecognized_csi": false, 00:12:16.641 "method": "bdev_nvme_attach_controller", 00:12:16.641 "req_id": 1 00:12:16.641 } 00:12:16.641 Got JSON-RPC error response 00:12:16.641 response: 00:12:16.641 { 00:12:16.641 "code": -5, 00:12:16.641 "message": "Input/output error" 00:12:16.641 } 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:16.641 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:16.900 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:17.159 request: 00:12:17.159 { 00:12:17.159 "name": "nvme0", 00:12:17.159 "trtype": "tcp", 00:12:17.159 "traddr": "10.0.0.3", 00:12:17.159 "adrfam": "ipv4", 00:12:17.159 "trsvcid": "4420", 00:12:17.159 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:17.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:17.159 "prchk_reftag": false, 00:12:17.159 "prchk_guard": false, 00:12:17.159 "hdgst": false, 00:12:17.159 "ddgst": false, 00:12:17.159 "dhchap_key": "key3", 00:12:17.159 "allow_unrecognized_csi": false, 00:12:17.159 "method": "bdev_nvme_attach_controller", 00:12:17.159 "req_id": 1 00:12:17.159 } 00:12:17.159 Got JSON-RPC error response 00:12:17.159 response: 00:12:17.159 { 00:12:17.159 "code": -5, 00:12:17.159 "message": "Input/output error" 00:12:17.159 } 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.159 09:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:17.419 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:17.986 request: 00:12:17.986 { 00:12:17.986 "name": "nvme0", 00:12:17.986 "trtype": "tcp", 00:12:17.986 "traddr": "10.0.0.3", 00:12:17.986 "adrfam": "ipv4", 00:12:17.986 "trsvcid": "4420", 00:12:17.986 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:17.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:17.986 "prchk_reftag": false, 00:12:17.986 "prchk_guard": false, 00:12:17.986 "hdgst": false, 00:12:17.986 "ddgst": false, 00:12:17.986 "dhchap_key": "key0", 00:12:17.986 "dhchap_ctrlr_key": "key1", 00:12:17.986 "allow_unrecognized_csi": false, 00:12:17.986 "method": "bdev_nvme_attach_controller", 00:12:17.986 "req_id": 1 00:12:17.986 } 00:12:17.986 Got JSON-RPC error response 00:12:17.986 response: 00:12:17.986 { 00:12:17.986 "code": -5, 00:12:17.986 "message": "Input/output error" 00:12:17.986 } 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:17.986 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:12:18.245 nvme0n1 00:12:18.245 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:12:18.245 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.245 09:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:12:18.503 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:18.503 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:18.503 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:18.761 09:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:19.695 nvme0n1 00:12:19.695 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:12:19.695 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.695 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:19.953 09:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:12:20.521 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:20.521 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:12:20.521 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid 5243355a-262e-4d66-b861-d6387f15e8f8 -l 0 --dhchap-secret DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: --dhchap-ctrl-secret DHHC-1:03:MmM0ZmMwYWUzZGI4MDI1NjFkYzI0ZjMwZTk0ZGFhZmY4ODBhNWRkMWU5MmFkOGNiM2U4OTQxYTVmY2NmOTY3MCl+M90=: 00:12:21.085 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.086 09:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:21.344 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:12:21.911 request: 00:12:21.911 { 00:12:21.911 "name": "nvme0", 00:12:21.911 "trtype": "tcp", 00:12:21.911 "traddr": "10.0.0.3", 00:12:21.911 "adrfam": "ipv4", 00:12:21.911 "trsvcid": "4420", 00:12:21.911 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:12:21.911 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8", 00:12:21.911 "prchk_reftag": false, 00:12:21.911 "prchk_guard": false, 00:12:21.911 "hdgst": false, 00:12:21.911 "ddgst": false, 00:12:21.911 "dhchap_key": "key1", 00:12:21.911 "allow_unrecognized_csi": false, 00:12:21.911 "method": "bdev_nvme_attach_controller", 00:12:21.911 "req_id": 1 00:12:21.911 } 00:12:21.911 Got JSON-RPC error response 00:12:21.911 response: 00:12:21.911 { 00:12:21.911 "code": -5, 00:12:21.911 "message": "Input/output error" 00:12:21.911 } 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:21.911 09:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:22.846 nvme0n1 00:12:22.846 
09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:12:22.846 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:12:22.846 09:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.104 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.104 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.104 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:23.670 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:12:23.928 nvme0n1 00:12:23.928 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:12:23.928 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:12:23.928 09:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:24.186 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:24.186 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:24.186 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.444 09:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: '' 2s 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: ]] 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZjdmOGQ4NDU4YWZhZmUzOGI3OTA3Y2E5MDY5NWM5YjZgce9P: 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:24.444 09:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:26.344 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:12:26.344 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:26.344 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:26.344 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:26.344 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:26.344 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key1 --dhchap-ctrlr-key key2 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: 2s 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:12:26.602 09:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: ]] 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NTNkYWE1ZmI2Y2VjODg2NTkyMWZkNzE0ODIyZWQ4ZjE5NDQ4NjllMzE0ZmUxMTAyDUX0GQ==: 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:12:26.602 09:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:28.504 09:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:29.441 nvme0n1 00:12:29.441 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:29.441 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.441 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.441 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.441 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:29.441 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:30.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:12:30.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:12:30.376 09:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:12:30.376 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:12:30.634 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:12:30.635 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:30.635 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:31.201 09:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:31.201 09:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:12:31.768 request: 00:12:31.768 { 00:12:31.768 "name": "nvme0", 00:12:31.768 "dhchap_key": "key1", 00:12:31.768 "dhchap_ctrlr_key": "key3", 00:12:31.768 "method": "bdev_nvme_set_keys", 00:12:31.768 "req_id": 1 00:12:31.768 } 00:12:31.768 Got JSON-RPC error response 00:12:31.768 response: 00:12:31.768 { 00:12:31.768 "code": -13, 00:12:31.768 "message": "Permission denied" 00:12:31.768 } 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:31.768 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:32.026 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:12:32.026 09:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:12:32.962 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:12:32.962 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:12:32.962 09:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key0 --dhchap-ctrlr-key key1 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:33.221 09:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:12:34.157 nvme0n1 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --dhchap-key key2 --dhchap-ctrlr-key key3 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:34.157 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:12:34.724 request: 00:12:34.724 { 00:12:34.724 "name": "nvme0", 00:12:34.724 "dhchap_key": "key2", 00:12:34.724 "dhchap_ctrlr_key": "key0", 00:12:34.724 "method": "bdev_nvme_set_keys", 00:12:34.724 "req_id": 1 00:12:34.724 } 00:12:34.724 Got JSON-RPC error response 00:12:34.724 response: 00:12:34.724 { 00:12:34.724 "code": -13, 00:12:34.724 "message": "Permission denied" 00:12:34.724 } 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:34.983 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:35.241 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:12:35.241 09:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:12:36.177 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:12:36.178 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:12:36.178 09:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 66961 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 66961 ']' 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 66961 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 66961 00:12:36.437 killing process with pid 66961 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:36.437 09:35:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 66961' 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 66961 00:12:36.437 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 66961 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:36.728 rmmod nvme_tcp 00:12:36.728 rmmod nvme_fabrics 00:12:36.728 rmmod nvme_keyring 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 70089 ']' 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 70089 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 70089 ']' 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 70089 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:36.728 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70089 00:12:37.013 killing process with pid 70089 00:12:37.013 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70089' 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 70089 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 70089 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:37.014 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:37.273 09:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Xy3 /tmp/spdk.key-sha256.7iv /tmp/spdk.key-sha384.HoR /tmp/spdk.key-sha512.cOp /tmp/spdk.key-sha512.7cO /tmp/spdk.key-sha384.2Cl /tmp/spdk.key-sha256.lnn '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:12:37.273 00:12:37.273 real 3m17.498s 00:12:37.273 user 7m56.072s 00:12:37.273 sys 0m28.568s 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.273 ************************************ 00:12:37.273 END TEST nvmf_auth_target 
00:12:37.273 ************************************ 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:37.273 ************************************ 00:12:37.273 START TEST nvmf_bdevio_no_huge 00:12:37.273 ************************************ 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:12:37.273 * Looking for test storage... 00:12:37.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:12:37.273 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.533 --rc genhtml_branch_coverage=1 00:12:37.533 --rc genhtml_function_coverage=1 00:12:37.533 --rc genhtml_legend=1 00:12:37.533 --rc geninfo_all_blocks=1 00:12:37.533 --rc geninfo_unexecuted_blocks=1 00:12:37.533 00:12:37.533 ' 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.533 --rc genhtml_branch_coverage=1 00:12:37.533 --rc genhtml_function_coverage=1 00:12:37.533 --rc genhtml_legend=1 00:12:37.533 --rc geninfo_all_blocks=1 00:12:37.533 --rc geninfo_unexecuted_blocks=1 00:12:37.533 00:12:37.533 ' 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.533 --rc genhtml_branch_coverage=1 00:12:37.533 --rc genhtml_function_coverage=1 00:12:37.533 --rc genhtml_legend=1 00:12:37.533 --rc geninfo_all_blocks=1 00:12:37.533 --rc geninfo_unexecuted_blocks=1 00:12:37.533 00:12:37.533 ' 00:12:37.533 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:37.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:37.533 --rc genhtml_branch_coverage=1 00:12:37.534 --rc genhtml_function_coverage=1 00:12:37.534 --rc genhtml_legend=1 00:12:37.534 --rc geninfo_all_blocks=1 00:12:37.534 --rc geninfo_unexecuted_blocks=1 00:12:37.534 00:12:37.534 ' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:37.534 
09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:37.534 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:37.534 
09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:37.534 Cannot find device "nvmf_init_br" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:37.534 Cannot find device "nvmf_init_br2" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:37.534 Cannot find device "nvmf_tgt_br" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:37.534 Cannot find device "nvmf_tgt_br2" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:37.534 Cannot find device "nvmf_init_br" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:37.534 Cannot find device "nvmf_init_br2" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:37.534 Cannot find device "nvmf_tgt_br" 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:12:37.534 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:37.535 Cannot find device "nvmf_tgt_br2" 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:37.535 Cannot find device "nvmf_br" 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:37.535 Cannot find device "nvmf_init_if" 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:37.535 Cannot find device "nvmf_init_if2" 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:12:37.535 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:12:37.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:37.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:37.794 09:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:37.794 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.794 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:37.794 00:12:37.794 --- 10.0.0.3 ping statistics --- 00:12:37.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.794 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:37.794 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:37.794 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:12:37.794 00:12:37.794 --- 10.0.0.4 ping statistics --- 00:12:37.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.794 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:37.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:37.794 00:12:37.794 --- 10.0.0.1 ping statistics --- 00:12:37.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.794 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:37.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:37.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:12:37.794 00:12:37.794 --- 10.0.0.2 ping statistics --- 00:12:37.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.794 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:37.794 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:37.795 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=70735 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 70735 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 70735 ']' 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:38.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:38.054 09:35:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:38.054 [2024-11-05 09:35:23.809530] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
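The NVMF_APP line above is the piece that makes the rest of the run work inside the namespace: the target binary is prefixed with the netns wrapper and started without hugepages. A condensed sketch of that launch pattern, using the paths and flags from the trace; the polling loop is a simplified, illustrative stand-in for the real waitforlisten helper, not a copy of it:

  # Wrap the target so every socket it opens lives in the test namespace.
  NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
  NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt)
  # --no-huge -s 1024: run from 1024 MB of ordinary memory instead of hugepages.
  "${NVMF_APP[@]}" -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
  nvmfpid=$!
  # Block until the target serves its RPC socket (or dies), then it is safe to issue RPCs.
  while ! [[ -S /var/tmp/spdk.sock ]] && kill -0 "$nvmfpid" 2>/dev/null; do
      sleep 0.1
  done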
00:12:38.054 [2024-11-05 09:35:23.809637] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:12:38.054 [2024-11-05 09:35:23.974783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:38.313 [2024-11-05 09:35:24.049245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:38.313 [2024-11-05 09:35:24.049311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:38.313 [2024-11-05 09:35:24.049325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:38.313 [2024-11-05 09:35:24.049335] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:38.313 [2024-11-05 09:35:24.049344] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:38.313 [2024-11-05 09:35:24.049956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:12:38.313 [2024-11-05 09:35:24.050104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:12:38.313 [2024-11-05 09:35:24.050318] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:12:38.313 [2024-11-05 09:35:24.050327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:38.313 [2024-11-05 09:35:24.055877] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:39.248 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 [2024-11-05 09:35:24.902737] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 Malloc0 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 09:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:39.249 [2024-11-05 09:35:24.942926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:39.249 { 00:12:39.249 "params": { 00:12:39.249 "name": "Nvme$subsystem", 00:12:39.249 "trtype": "$TEST_TRANSPORT", 00:12:39.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:39.249 "adrfam": "ipv4", 00:12:39.249 "trsvcid": "$NVMF_PORT", 00:12:39.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:39.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:39.249 "hdgst": ${hdgst:-false}, 00:12:39.249 "ddgst": ${ddgst:-false} 00:12:39.249 }, 00:12:39.249 "method": "bdev_nvme_attach_controller" 00:12:39.249 } 00:12:39.249 EOF 00:12:39.249 )") 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
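The gen_nvmf_target_json output above is handed to bdevio as an anonymous file descriptor (/dev/fd/62), so the initiator configuration never touches disk. A minimal sketch of the same pattern; the params block mirrors the printf that follows in the trace, while the outer subsystems/bdev wrapper is an assumption about the helper's full output rather than a verbatim copy:

  gen_config() {
      # One attach-controller entry, matching the parameters printed below.
      cat <<'JSON'
  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme1",
              "trtype": "tcp",
              "traddr": "10.0.0.3",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }
  JSON
  }
  # Process substitution gives bdevio the config as an anonymous fd.
  ./test/bdev/bdevio/bdevio --json <(gen_config) --no-huge -s 1024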
00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:12:39.249 09:35:24 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:39.249 "params": { 00:12:39.249 "name": "Nvme1", 00:12:39.249 "trtype": "tcp", 00:12:39.249 "traddr": "10.0.0.3", 00:12:39.249 "adrfam": "ipv4", 00:12:39.249 "trsvcid": "4420", 00:12:39.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:39.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:39.249 "hdgst": false, 00:12:39.249 "ddgst": false 00:12:39.249 }, 00:12:39.249 "method": "bdev_nvme_attach_controller" 00:12:39.249 }' 00:12:39.249 [2024-11-05 09:35:25.007657] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:12:39.249 [2024-11-05 09:35:25.007749] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid70771 ] 00:12:39.249 [2024-11-05 09:35:25.169219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.508 [2024-11-05 09:35:25.244648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.508 [2024-11-05 09:35:25.244780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.508 [2024-11-05 09:35:25.244789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:39.508 [2024-11-05 09:35:25.259248] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:39.508 I/O targets: 00:12:39.508 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:39.508 00:12:39.508 00:12:39.508 CUnit - A unit testing framework for C - Version 2.1-3 00:12:39.508 http://cunit.sourceforge.net/ 00:12:39.508 00:12:39.508 00:12:39.508 Suite: bdevio tests on: Nvme1n1 00:12:39.508 Test: blockdev write read block ...passed 00:12:39.508 Test: blockdev write zeroes read block ...passed 00:12:39.508 Test: blockdev write zeroes read no split ...passed 00:12:39.767 Test: blockdev write zeroes read split ...passed 00:12:39.767 Test: blockdev write zeroes read split partial ...passed 00:12:39.767 Test: blockdev reset ...[2024-11-05 09:35:25.485884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:39.767 [2024-11-05 09:35:25.485996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe42310 (9): Bad file descriptor 00:12:39.767 [2024-11-05 09:35:25.504198] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:12:39.767 passed
00:12:39.767 Test: blockdev write read 8 blocks ...passed
00:12:39.767 Test: blockdev write read size > 128k ...passed
00:12:39.767 Test: blockdev write read invalid size ...passed
00:12:39.767 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:12:39.767 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:12:39.767 Test: blockdev write read max offset ...passed
00:12:39.767 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:12:39.767 Test: blockdev writev readv 8 blocks ...passed
00:12:39.767 Test: blockdev writev readv 30 x 1block ...passed
00:12:39.767 Test: blockdev writev readv block ...passed
00:12:39.767 Test: blockdev writev readv size > 128k ...passed
00:12:39.767 Test: blockdev writev readv size > 128k in two iovs ...passed
00:12:39.767 Test: blockdev comparev and writev ...[2024-11-05 09:35:25.514077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.514116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.514136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.514147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.514425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.514444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.514461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.514471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.514782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.514800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.514818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.514828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:12:39.767 passed
00:12:39.767 Test: blockdev nvme passthru rw ...passed
00:12:39.767 Test: blockdev nvme passthru vendor specific ...passed
00:12:39.767 Test: blockdev nvme admin passthru ...[2024-11-05 09:35:25.515250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.515343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.515363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200
00:12:39.767 [2024-11-05 09:35:25.515374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.516298] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:12:39.767 [2024-11-05 09:35:25.516325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.516437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:12:39.767 [2024-11-05 09:35:25.516454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.516550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:12:39.767 [2024-11-05 09:35:25.516566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:12:39.767 [2024-11-05 09:35:25.516675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0
00:12:39.767 [2024-11-05 09:35:25.516693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:12:39.767 passed
00:12:39.767 Test: blockdev copy ...passed
00:12:39.767
00:12:39.767 Run Summary: Type Total Ran Passed Failed Inactive
00:12:39.767 suites 1 1 n/a 0 0
00:12:39.767 tests 23 23 23 0 0
00:12:39.767 asserts 152 152 152 0 n/a
00:12:39.767
00:12:39.767 Elapsed time = 0.180 seconds
00:12:40.026 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20}
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:12:40.027 rmmod nvme_tcp
00:12:40.027 rmmod nvme_fabrics
00:12:40.027 rmmod nvme_keyring
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge --
nvmf/common.sh@128 -- # set -e 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 70735 ']' 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 70735 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 70735 ']' 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 70735 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70735 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:12:40.027 killing process with pid 70735 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70735' 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 70735 00:12:40.027 09:35:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 70735 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:40.595 09:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:40.595 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:12:40.854 00:12:40.854 real 0m3.520s 00:12:40.854 user 0m10.592s 00:12:40.854 sys 0m1.321s 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:12:40.854 ************************************ 00:12:40.854 END TEST nvmf_bdevio_no_huge 00:12:40.854 ************************************ 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:40.854 ************************************ 00:12:40.854 START TEST nvmf_tls 00:12:40.854 ************************************ 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:12:40.854 * Looking for test storage... 
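The teardown traced at the top of this block is worth reading in order: the firewall is restored by filtering out only the SPDK_NVMF-tagged rules, bridge members are released before the bridge itself goes, and the namespace comes out last. A condensed sketch under the interface names this run uses; remove_spdk_ns is approximated here by a plain netns delete:

  # Drop only the rules this test added (they carry an SPDK_NVMF comment).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  # Detach and lower the host-side bridge ports, tolerating already-gone devices.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster 2>/dev/null || true
      ip link set "$dev" down 2>/dev/null || true
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if
  ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk   # roughly what remove_spdk_ns does

Deleting a veth endpoint removes its peer too, which is why the earlier "Cannot find device" messages during the next setup's pre-cleanup are harmless.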
00:12:40.854 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:12:40.854 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:41.114 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:41.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.115 --rc genhtml_branch_coverage=1 00:12:41.115 --rc genhtml_function_coverage=1 00:12:41.115 --rc genhtml_legend=1 00:12:41.115 --rc geninfo_all_blocks=1 00:12:41.115 --rc geninfo_unexecuted_blocks=1 00:12:41.115 00:12:41.115 ' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:41.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.115 --rc genhtml_branch_coverage=1 00:12:41.115 --rc genhtml_function_coverage=1 00:12:41.115 --rc genhtml_legend=1 00:12:41.115 --rc geninfo_all_blocks=1 00:12:41.115 --rc geninfo_unexecuted_blocks=1 00:12:41.115 00:12:41.115 ' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:41.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.115 --rc genhtml_branch_coverage=1 00:12:41.115 --rc genhtml_function_coverage=1 00:12:41.115 --rc genhtml_legend=1 00:12:41.115 --rc geninfo_all_blocks=1 00:12:41.115 --rc geninfo_unexecuted_blocks=1 00:12:41.115 00:12:41.115 ' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:41.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:41.115 --rc genhtml_branch_coverage=1 00:12:41.115 --rc genhtml_function_coverage=1 00:12:41.115 --rc genhtml_legend=1 00:12:41.115 --rc geninfo_all_blocks=1 00:12:41.115 --rc geninfo_unexecuted_blocks=1 00:12:41.115 00:12:41.115 ' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:41.115 09:35:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:41.115 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:41.115 
09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:41.115 Cannot find device "nvmf_init_br" 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:12:41.115 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:41.116 Cannot find device "nvmf_init_br2" 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:41.116 Cannot find device "nvmf_tgt_br" 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:41.116 Cannot find device "nvmf_tgt_br2" 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:41.116 Cannot find device "nvmf_init_br" 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:41.116 Cannot find device "nvmf_init_br2" 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:41.116 Cannot find device "nvmf_tgt_br" 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:12:41.116 09:35:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:41.116 Cannot find device "nvmf_tgt_br2" 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:41.116 Cannot find device "nvmf_br" 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:41.116 Cannot find device "nvmf_init_if" 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:41.116 Cannot find device "nvmf_init_if2" 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:41.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:41.116 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:41.116 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:41.375 09:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:41.375 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:41.375 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:12:41.375 00:12:41.375 --- 10.0.0.3 ping statistics --- 00:12:41.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.375 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:41.375 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:41.375 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:12:41.375 00:12:41.375 --- 10.0.0.4 ping statistics --- 00:12:41.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.375 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:41.375 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:41.375 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:12:41.375 00:12:41.375 --- 10.0.0.1 ping statistics --- 00:12:41.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.375 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:41.375 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:41.375 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:41.375 00:12:41.375 --- 10.0.0.2 ping statistics --- 00:12:41.375 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.375 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.375 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71003 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71003 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71003 ']' 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:41.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:41.634 09:35:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:41.634 [2024-11-05 09:35:27.411687] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
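Unlike the bdevio run, this target starts with --wait-for-rpc: socket-implementation options such as the TLS version can only be changed before the subsystems initialize, so tls.sh selects and tunes the ssl implementation over RPC and only then lets startup finish. A sketch of that ordering, using the RPC names from the trace; the essential point is that framework_start_init comes after the sock options:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # The app pauses before subsystem init and just serves RPCs.
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
  $rpc sock_set_default_impl -i ssl               # pick the TLS-capable impl
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init                       # resume startup with ssl in place
  $rpc nvmf_create_transport -t tcp -o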
00:12:41.634 [2024-11-05 09:35:27.411791] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:41.634 [2024-11-05 09:35:27.569288] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:41.893 [2024-11-05 09:35:27.607877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:41.893 [2024-11-05 09:35:27.607953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:41.893 [2024-11-05 09:35:27.607977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:41.893 [2024-11-05 09:35:27.607988] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:41.893 [2024-11-05 09:35:27.607996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:41.893 [2024-11-05 09:35:27.608383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.461 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.461 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:12:42.461 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.461 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.461 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:12:42.720 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.720 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:12:42.720 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:12:42.977 true 00:12:42.978 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:42.978 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:12:43.235 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:12:43.235 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:12:43.235 09:35:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:43.494 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:43.494 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:12:43.752 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:12:43.752 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:12:43.752 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:12:44.011 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:12:44.011 09:35:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:12:44.269 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:12:44.269 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:12:44.269 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:12:44.269 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:44.527 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:12:44.527 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:12:44.527 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:12:44.786 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:12:44.786 09:35:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:45.352 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:12:45.352 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:12:45.352 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:12:45.611 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:12:45.611 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.RHy2LBnUnY 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.ZeUxGJqE9X 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RHy2LBnUnY 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.ZeUxGJqE9X 00:12:45.883 09:35:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:12:46.142 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:12:46.710 [2024-11-05 09:35:32.370738] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:46.710 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.RHy2LBnUnY 00:12:46.710 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.RHy2LBnUnY 00:12:46.710 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:46.969 [2024-11-05 09:35:32.702629] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.969 09:35:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:47.227 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:12:47.486 [2024-11-05 09:35:33.310837] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:47.486 [2024-11-05 09:35:33.311130] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:47.486 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:47.745 malloc0 00:12:47.745 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:48.004 09:35:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.RHy2LBnUnY 00:12:48.572 09:35:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:12:48.572 09:35:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.RHy2LBnUnY 00:13:00.780 Initializing NVMe Controllers 00:13:00.780 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:13:00.780 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:00.780 Initialization complete. Launching workers. 00:13:00.780 ======================================================== 00:13:00.780 Latency(us) 00:13:00.780 Device Information : IOPS MiB/s Average min max 00:13:00.780 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9622.28 37.59 6652.70 2409.19 8483.41 00:13:00.780 ======================================================== 00:13:00.780 Total : 9622.28 37.59 6652.70 2409.19 8483.41 00:13:00.780 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RHy2LBnUnY 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RHy2LBnUnY 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71255 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71255 /var/tmp/bdevperf.sock 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71255 ']' 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:00.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
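The two interchange keys exercised in these runs (/tmp/tmp.RHy2LBnUnY holding the matching key, /tmp/tmp.ZeUxGJqE9X the deliberately mismatched one) were produced by the format_interchange_psk helper traced above. A minimal Python sketch of that derivation, assuming, consistent with the values in this log, that the configured key is used as its literal ASCII string and a little-endian CRC-32 of it is appended before base64 encoding:

    import base64, zlib

    def format_interchange_psk(key: str, digest: int) -> str:
        # Sketch of the format_key heredoc traced above; the CRC-32
        # byte order is assumed little-endian, so only the trailing
        # base64 characters would change if that assumption is wrong.
        data = key.encode("ascii")
        crc = zlib.crc32(data).to_bytes(4, "little")
        return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(data + crc).decode()}:"

    # digest 1 marks the SHA-256 variant, as in the trace above
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
    # expected from the log: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: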
00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:00.780 09:35:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:00.780 [2024-11-05 09:35:44.807782] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:00.780 [2024-11-05 09:35:44.808603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71255 ] 00:13:00.780 [2024-11-05 09:35:44.959086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.780 [2024-11-05 09:35:44.989756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:00.780 [2024-11-05 09:35:45.021494] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:00.780 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:00.780 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:00.780 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RHy2LBnUnY 00:13:00.780 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:00.780 [2024-11-05 09:35:45.574213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:00.780 TLSTESTn1 00:13:00.780 09:35:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:00.781 Running I/O for 10 seconds... 
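bdevperf runs here as a passive server (-z) listening on /var/tmp/bdevperf.sock, and bdevperf.py merely issues a perform_tests RPC against it. A minimal sketch of such a client, assuming SPDK's usual JSON-RPC 2.0-over-Unix-socket framing with a single JSON object per reply:

    import json, socket

    def spdk_rpc(sock_path: str, method: str, params: dict | None = None):
        # Minimal JSON-RPC 2.0 client for an SPDK-style Unix socket; a
        # sketch of what scripts/rpc.py and bdevperf.py do under the hood
        req = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            req["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before a full reply arrived")
                buf += chunk
                try:
                    return json.loads(buf)   # reply is one JSON object
                except json.JSONDecodeError:
                    continue                 # partial read, keep receiving

    # what `bdevperf.py -s /var/tmp/bdevperf.sock perform_tests` amounts to:
    # spdk_rpc("/var/tmp/bdevperf.sock", "perform_tests")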
00:13:01.976 4220.00 IOPS, 16.48 MiB/s [2024-11-05T09:35:48.870Z] 4112.50 IOPS, 16.06 MiB/s [2024-11-05T09:35:49.808Z] 4077.00 IOPS, 15.93 MiB/s [2024-11-05T09:35:51.186Z] 4108.25 IOPS, 16.05 MiB/s [2024-11-05T09:35:52.122Z] 4103.60 IOPS, 16.03 MiB/s [2024-11-05T09:35:53.058Z] 4124.17 IOPS, 16.11 MiB/s [2024-11-05T09:35:53.995Z] 4126.43 IOPS, 16.12 MiB/s [2024-11-05T09:35:54.946Z] 4128.75 IOPS, 16.13 MiB/s [2024-11-05T09:35:55.900Z] 4142.00 IOPS, 16.18 MiB/s [2024-11-05T09:35:55.900Z] 4139.80 IOPS, 16.17 MiB/s 00:13:09.942 Latency(us) 00:13:09.942 [2024-11-05T09:35:55.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.942 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:09.942 Verification LBA range: start 0x0 length 0x2000 00:13:09.942 TLSTESTn1 : 10.02 4142.20 16.18 0.00 0.00 30835.71 7536.64 24069.59 00:13:09.942 [2024-11-05T09:35:55.900Z] =================================================================================================================== 00:13:09.942 [2024-11-05T09:35:55.900Z] Total : 4142.20 16.18 0.00 0.00 30835.71 7536.64 24069.59 00:13:09.942 { 00:13:09.942 "results": [ 00:13:09.942 { 00:13:09.942 "job": "TLSTESTn1", 00:13:09.942 "core_mask": "0x4", 00:13:09.942 "workload": "verify", 00:13:09.942 "status": "finished", 00:13:09.942 "verify_range": { 00:13:09.942 "start": 0, 00:13:09.942 "length": 8192 00:13:09.942 }, 00:13:09.942 "queue_depth": 128, 00:13:09.942 "io_size": 4096, 00:13:09.942 "runtime": 10.024867, 00:13:09.942 "iops": 4142.199592273892, 00:13:09.942 "mibps": 16.180467157319892, 00:13:09.942 "io_failed": 0, 00:13:09.942 "io_timeout": 0, 00:13:09.942 "avg_latency_us": 30835.71277342236, 00:13:09.942 "min_latency_us": 7536.64, 00:13:09.942 "max_latency_us": 24069.585454545453 00:13:09.942 } 00:13:09.942 ], 00:13:09.942 "core_count": 1 00:13:09.942 } 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71255 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71255 ']' 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71255 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71255 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71255' 00:13:09.942 killing process with pid 71255 00:13:09.942 Received shutdown signal, test time was about 10.000000 seconds 00:13:09.942 00:13:09.942 Latency(us) 00:13:09.942 [2024-11-05T09:35:55.900Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:09.942 [2024-11-05T09:35:55.900Z] =================================================================================================================== 00:13:09.942 [2024-11-05T09:35:55.900Z] Total : 0.00 0.00 0.00 0.00 
0.00 0.00 0.00 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71255 00:13:09.942 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71255 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUxGJqE9X 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUxGJqE9X 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:10.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.ZeUxGJqE9X 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.ZeUxGJqE9X 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71382 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71382 /var/tmp/bdevperf.sock 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71382 ']' 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:10.202 09:35:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:10.202 [2024-11-05 09:35:56.046364] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
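As a quick cross-check of the TLSTESTn1 result block above, the reported mibps follows directly from iops and the 4096-byte I/O size:

    # MiB/s = IOPS * io_size / 2**20
    iops, io_size = 4142.199592273892, 4096
    print(iops * io_size / 2**20)   # ~16.180467, matching the "mibps" field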
00:13:10.202 [2024-11-05 09:35:56.046529] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71382 ] 00:13:10.461 [2024-11-05 09:35:56.191820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.461 [2024-11-05 09:35:56.223514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.461 [2024-11-05 09:35:56.253440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:10.461 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:10.461 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:10.461 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.ZeUxGJqE9X 00:13:10.720 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:10.979 [2024-11-05 09:35:56.832206] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:10.979 [2024-11-05 09:35:56.843283] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:10.979 [2024-11-05 09:35:56.843977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4fb0 (107): Transport endpoint is not connected 00:13:10.979 [2024-11-05 09:35:56.844964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17b4fb0 (9): Bad file descriptor 00:13:10.979 [2024-11-05 09:35:56.845959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:10.979 [2024-11-05 09:35:56.846003] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:10.979 [2024-11-05 09:35:56.846013] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:10.979 [2024-11-05 09:35:56.846023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:10.979 request: 00:13:10.979 { 00:13:10.979 "name": "TLSTEST", 00:13:10.979 "trtype": "tcp", 00:13:10.979 "traddr": "10.0.0.3", 00:13:10.979 "adrfam": "ipv4", 00:13:10.979 "trsvcid": "4420", 00:13:10.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:10.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:10.979 "prchk_reftag": false, 00:13:10.979 "prchk_guard": false, 00:13:10.979 "hdgst": false, 00:13:10.979 "ddgst": false, 00:13:10.979 "psk": "key0", 00:13:10.979 "allow_unrecognized_csi": false, 00:13:10.979 "method": "bdev_nvme_attach_controller", 00:13:10.979 "req_id": 1 00:13:10.979 } 00:13:10.979 Got JSON-RPC error response 00:13:10.979 response: 00:13:10.979 { 00:13:10.979 "code": -5, 00:13:10.979 "message": "Input/output error" 00:13:10.979 } 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71382 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71382 ']' 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71382 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71382 00:13:10.979 killing process with pid 71382 00:13:10.979 Received shutdown signal, test time was about 10.000000 seconds 00:13:10.979 00:13:10.979 Latency(us) 00:13:10.979 [2024-11-05T09:35:56.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.979 [2024-11-05T09:35:56.937Z] =================================================================================================================== 00:13:10.979 [2024-11-05T09:35:56.937Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71382' 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71382 00:13:10.979 09:35:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71382 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RHy2LBnUnY 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RHy2LBnUnY 
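The NOT/valid_exec_arg wrappers traced here invert the wrapped command's exit status, so this negative test passes precisely because bdev_nvme_attach_controller failed with the mismatched key. A Python analogue of that pattern:

    import subprocess

    def NOT(*cmd: str) -> bool:
        # Negative-test helper: succeed only when the wrapped command fails,
        # mirroring the shell NOT wrapper traced above
        return subprocess.run(cmd).returncode != 0

    assert NOT("false") and not NOT("true")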
00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.RHy2LBnUnY 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RHy2LBnUnY 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71403 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71403 /var/tmp/bdevperf.sock 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71403 ']' 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:11.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.238 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:11.238 [2024-11-05 09:35:57.122353] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:13:11.238 [2024-11-05 09:35:57.122468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71403 ] 00:13:11.497 [2024-11-05 09:35:57.284556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.497 [2024-11-05 09:35:57.318111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.497 [2024-11-05 09:35:57.348862] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:11.497 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.497 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:11.497 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RHy2LBnUnY 00:13:12.064 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:13:12.065 [2024-11-05 09:35:57.959028] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:12.065 [2024-11-05 09:35:57.964017] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:12.065 [2024-11-05 09:35:57.964063] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:13:12.065 [2024-11-05 09:35:57.964130] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:12.065 [2024-11-05 09:35:57.964809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170ffb0 (107): Transport endpoint is not connected 00:13:12.065 [2024-11-05 09:35:57.965753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x170ffb0 (9): Bad file descriptor 00:13:12.065 [2024-11-05 09:35:57.966749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:13:12.065 [2024-11-05 09:35:57.966774] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:12.065 [2024-11-05 09:35:57.966802] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:13:12.065 [2024-11-05 09:35:57.966813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:13:12.065 request: 00:13:12.065 { 00:13:12.065 "name": "TLSTEST", 00:13:12.065 "trtype": "tcp", 00:13:12.065 "traddr": "10.0.0.3", 00:13:12.065 "adrfam": "ipv4", 00:13:12.065 "trsvcid": "4420", 00:13:12.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:12.065 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:13:12.065 "prchk_reftag": false, 00:13:12.065 "prchk_guard": false, 00:13:12.065 "hdgst": false, 00:13:12.065 "ddgst": false, 00:13:12.065 "psk": "key0", 00:13:12.065 "allow_unrecognized_csi": false, 00:13:12.065 "method": "bdev_nvme_attach_controller", 00:13:12.065 "req_id": 1 00:13:12.065 } 00:13:12.065 Got JSON-RPC error response 00:13:12.065 response: 00:13:12.065 { 00:13:12.065 "code": -5, 00:13:12.065 "message": "Input/output error" 00:13:12.065 } 00:13:12.065 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71403 00:13:12.065 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71403 ']' 00:13:12.065 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71403 00:13:12.065 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:12.065 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:12.065 09:35:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71403 00:13:12.065 killing process with pid 71403 00:13:12.065 Received shutdown signal, test time was about 10.000000 seconds 00:13:12.065 00:13:12.065 Latency(us) 00:13:12.065 [2024-11-05T09:35:58.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:12.065 [2024-11-05T09:35:58.023Z] =================================================================================================================== 00:13:12.065 [2024-11-05T09:35:58.023Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:12.065 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:12.065 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:12.065 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71403' 00:13:12.065 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71403 00:13:12.065 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71403 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RHy2LBnUnY 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RHy2LBnUnY 
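The "Could not find PSK for identity" errors above show how the target looks keys up: the TLS PSK identity concatenates a fixed NVMe0R prefix, a two-digit suffix, and the host and subsystem NQNs, so a key registered for host1/cnode1 cannot satisfy host2 or cnode2. A small reconstruction, where the meaning of the "01" suffix is assumed here to track the negotiated hash:

    def tls_psk_identity(hostnqn: str, subnqn: str, hash_suffix: str = "01") -> str:
        # Rebuilds the identity strings seen in the errors above
        return f"NVMe0R{hash_suffix} {hostnqn} {subnqn}"

    print(tls_psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))
    # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1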
00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.334 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:12.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.RHy2LBnUnY 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.RHy2LBnUnY 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71424 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71424 /var/tmp/bdevperf.sock 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71424 ']' 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:12.335 09:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:12.335 [2024-11-05 09:35:58.206799] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
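The bdev_nvme_attach_controller request dumps repeated throughout this log all share one parameter shape; a helper reproducing it, with every field taken verbatim from those dumps:

    def attach_params(subnqn: str, hostnqn: str, psk_name: str) -> dict:
        # Parameter shape copied from the bdev_nvme_attach_controller request
        # dumps in this log; only the NQNs and the key name vary per test
        return {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": subnqn,
            "hostnqn": hostnqn,
            "prchk_reftag": False,
            "prchk_guard": False,
            "hdgst": False,
            "ddgst": False,
            "psk": psk_name,
            "allow_unrecognized_csi": False,
        }

    # this run: valid key registered as key0, but an unknown subsystem NQN
    params = attach_params("nqn.2016-06.io.spdk:cnode2", "nqn.2016-06.io.spdk:host1", "key0")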
00:13:12.335 [2024-11-05 09:35:58.206888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71424 ] 00:13:12.605 [2024-11-05 09:35:58.350310] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.605 [2024-11-05 09:35:58.380327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.605 [2024-11-05 09:35:58.409440] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:13.540 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:13.540 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:13.540 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.RHy2LBnUnY 00:13:13.540 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:13.799 [2024-11-05 09:35:59.691933] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:13.799 [2024-11-05 09:35:59.703591] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:13.799 [2024-11-05 09:35:59.703827] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:13:13.799 [2024-11-05 09:35:59.704125] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:13:13.799 [2024-11-05 09:35:59.704422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dbfb0 (107): Transport endpoint is not connected 00:13:13.799 [2024-11-05 09:35:59.705405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21dbfb0 (9): Bad file descriptor 00:13:13.799 [2024-11-05 09:35:59.706416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:13:13.799 [2024-11-05 09:35:59.706808] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:13:13.799 [2024-11-05 09:35:59.707089] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:13:13.799 request: 00:13:13.799 { 00:13:13.799 "name": "TLSTEST", 00:13:13.799 "trtype": "tcp", 00:13:13.799 "traddr": "10.0.0.3", 00:13:13.799 "adrfam": "ipv4", 00:13:13.799 "trsvcid": "4420", 00:13:13.799 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:13:13.799 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:13.799 "prchk_reftag": false, 00:13:13.799 "prchk_guard": false, 00:13:13.799 "hdgst": false, 00:13:13.799 "ddgst": false, 00:13:13.799 "psk": "key0", 00:13:13.799 "allow_unrecognized_csi": false, 00:13:13.799 "method": "bdev_nvme_attach_controller", 00:13:13.799 "req_id": 1 00:13:13.799 } 00:13:13.799 Got 
JSON-RPC error response 00:13:13.799 response: 00:13:13.799 { 00:13:13.799 "code": -5, 00:13:13.799 "message": "Input/output error" 00:13:13.799 } 00:13:13.799 [2024-11-05 09:35:59.707465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:13:13.799 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71424 00:13:13.799 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71424 ']' 00:13:13.799 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71424 00:13:13.799 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:13.799 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:13.799 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71424 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71424' 00:13:14.058 killing process with pid 71424 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71424 00:13:14.058 Received shutdown signal, test time was about 10.000000 seconds 00:13:14.058 00:13:14.058 Latency(us) 00:13:14.058 [2024-11-05T09:36:00.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:14.058 [2024-11-05T09:36:00.016Z] =================================================================================================================== 00:13:14.058 [2024-11-05T09:36:00.016Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71424 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:14.058 09:35:59 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71453 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71453 /var/tmp/bdevperf.sock 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71453 ']' 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:14.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:14.058 09:35:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:14.058 [2024-11-05 09:35:59.957005] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
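The run below feeds keyring_file_add_key an empty path on purpose. The keyring_file backend accepts only absolute paths (see the "Non-absolute paths are not allowed" error and the -1 "Operation not permitted" response in the request dump below), so a client can fail fast before issuing the RPC; a sketch using the {"name", "path"} parameter names from that dump:

    import os

    def file_key_params(name: str, path: str) -> dict:
        # Build keyring_file_add_key params; reject what the keyring
        # would reject anyway, namely any non-absolute path
        if not os.path.isabs(path):
            raise ValueError(f"keyring_file_add_key requires an absolute path, got {path!r}")
        return {"name": name, "path": path}

    file_key_params("key0", "/tmp/tmp.RHy2LBnUnY")   # fine
    # file_key_params("key0", "")   # raises, mirroring the RPC failure below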
00:13:14.058 [2024-11-05 09:35:59.957154] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71453 ] 00:13:14.316 [2024-11-05 09:36:00.102783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.316 [2024-11-05 09:36:00.135747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.316 [2024-11-05 09:36:00.166520] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:14.316 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:14.316 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:14.316 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:13:14.575 [2024-11-05 09:36:00.501138] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:13:14.575 [2024-11-05 09:36:00.501705] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:14.575 request: 00:13:14.575 { 00:13:14.576 "name": "key0", 00:13:14.576 "path": "", 00:13:14.576 "method": "keyring_file_add_key", 00:13:14.576 "req_id": 1 00:13:14.576 } 00:13:14.576 Got JSON-RPC error response 00:13:14.576 response: 00:13:14.576 { 00:13:14.576 "code": -1, 00:13:14.576 "message": "Operation not permitted" 00:13:14.576 } 00:13:14.576 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:15.144 [2024-11-05 09:36:00.821379] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:15.144 [2024-11-05 09:36:00.821905] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:15.144 request: 00:13:15.144 { 00:13:15.144 "name": "TLSTEST", 00:13:15.144 "trtype": "tcp", 00:13:15.144 "traddr": "10.0.0.3", 00:13:15.144 "adrfam": "ipv4", 00:13:15.144 "trsvcid": "4420", 00:13:15.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:15.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:15.144 "prchk_reftag": false, 00:13:15.144 "prchk_guard": false, 00:13:15.144 "hdgst": false, 00:13:15.144 "ddgst": false, 00:13:15.144 "psk": "key0", 00:13:15.144 "allow_unrecognized_csi": false, 00:13:15.144 "method": "bdev_nvme_attach_controller", 00:13:15.144 "req_id": 1 00:13:15.144 } 00:13:15.144 Got JSON-RPC error response 00:13:15.144 response: 00:13:15.144 { 00:13:15.144 "code": -126, 00:13:15.144 "message": "Required key not available" 00:13:15.144 } 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71453 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71453 ']' 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71453 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.144 09:36:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71453 00:13:15.144 killing process with pid 71453 00:13:15.144 Received shutdown signal, test time was about 10.000000 seconds 00:13:15.144 00:13:15.144 Latency(us) 00:13:15.144 [2024-11-05T09:36:01.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:15.144 [2024-11-05T09:36:01.102Z] =================================================================================================================== 00:13:15.144 [2024-11-05T09:36:01.102Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71453' 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71453 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71453 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 71003 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71003 ']' 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71003 00:13:15.144 09:36:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71003 00:13:15.144 killing process with pid 71003 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71003' 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71003 00:13:15.144 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71003 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Tb7u5KQtIo 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Tb7u5KQtIo 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71484 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71484 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71484 ']' 00:13:15.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:15.403 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.403 [2024-11-05 09:36:01.309318] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:15.403 [2024-11-05 09:36:01.309449] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.661 [2024-11-05 09:36:01.460822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:15.661 [2024-11-05 09:36:01.493785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.661 [2024-11-05 09:36:01.494074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:15.661 [2024-11-05 09:36:01.494099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.661 [2024-11-05 09:36:01.494110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.661 [2024-11-05 09:36:01.494117] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:15.661 [2024-11-05 09:36:01.494535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.661 [2024-11-05 09:36:01.525888] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Tb7u5KQtIo 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tb7u5KQtIo 00:13:15.919 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:16.177 [2024-11-05 09:36:01.969182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:16.177 09:36:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:16.436 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:16.694 [2024-11-05 09:36:02.533412] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:16.694 [2024-11-05 09:36:02.533887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:16.694 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:16.952 malloc0 00:13:16.952 09:36:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:17.211 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:17.470 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tb7u5KQtIo 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
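The setup_nvmf_tgt steps traced here are plain JSON-RPC calls against /var/tmp/spdk.sock; rpc.py is only a thin client. A sketch of the same sequence without rpc.py, with method and parameter names taken verbatim from the request/response dumps later in this log (the framing loop, which reads until the reply parses, is an assumption of this sketch):

import json
import socket

def rpc(sock_path, method, params=None):
    # One JSON-RPC 2.0 request over SPDK's Unix-domain RPC socket.
    req = {"jsonrpc": "2.0", "id": 1, "method": method}
    if params:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                raise ConnectionError("RPC socket closed early")
            buf += chunk
            try:
                return json.loads(buf)
            except json.JSONDecodeError:
                continue  # reply not complete yet

sock = "/var/tmp/spdk.sock"
rpc(sock, "nvmf_create_transport", {"trtype": "TCP"})
rpc(sock, "nvmf_create_subsystem", {"nqn": "nqn.2016-06.io.spdk:cnode1",
    "serial_number": "SPDK00000000000001", "max_namespaces": 10})
rpc(sock, "nvmf_subsystem_add_listener", {"nqn": "nqn.2016-06.io.spdk:cnode1",
    "secure_channel": True,  # the -k flag in the trace above
    "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                       "traddr": "10.0.0.3", "trsvcid": "4420"}})
rpc(sock, "keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.Tb7u5KQtIo"})
rpc(sock, "nvmf_subsystem_add_host", {"nqn": "nqn.2016-06.io.spdk:cnode1",
    "host": "nqn.2016-06.io.spdk:host1", "psk": "key0"})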
00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Tb7u5KQtIo 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71538 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71538 /var/tmp/bdevperf.sock 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71538 ']' 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:17.728 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:17.728 09:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:17.728 [2024-11-05 09:36:03.674444] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
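The waitforlisten helper traced above (with its max_retries=100 loop) effectively polls until the -r RPC socket accepts connections. A rough approximation, not SPDK's actual implementation:

import socket
import time

def waitforlisten(sock_path, max_retries=100):
    # Poll the Unix-domain RPC socket until the app is up and listening.
    for _ in range(max_retries):
        try:
            with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                s.connect(sock_path)
                return
        except OSError:
            time.sleep(0.1)
    raise TimeoutError(sock_path + " never came up")

waitforlisten("/var/tmp/bdevperf.sock")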
00:13:17.728 [2024-11-05 09:36:03.674720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71538 ] 00:13:18.026 [2024-11-05 09:36:03.826564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:18.026 [2024-11-05 09:36:03.867381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:18.026 [2024-11-05 09:36:03.903591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:18.992 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:18.992 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:18.992 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:18.992 09:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:19.251 [2024-11-05 09:36:05.138157] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:19.509 TLSTESTn1 00:13:19.509 09:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:19.509 Running I/O for 10 seconds... 00:13:21.386 3917.00 IOPS, 15.30 MiB/s [2024-11-05T09:36:08.720Z] 3967.00 IOPS, 15.50 MiB/s [2024-11-05T09:36:09.654Z] 4098.00 IOPS, 16.01 MiB/s [2024-11-05T09:36:10.589Z] 4122.00 IOPS, 16.10 MiB/s [2024-11-05T09:36:11.525Z] 4107.60 IOPS, 16.05 MiB/s [2024-11-05T09:36:12.460Z] 4093.50 IOPS, 15.99 MiB/s [2024-11-05T09:36:13.403Z] 4085.14 IOPS, 15.96 MiB/s [2024-11-05T09:36:14.779Z] 4086.00 IOPS, 15.96 MiB/s [2024-11-05T09:36:15.346Z] 4104.33 IOPS, 16.03 MiB/s [2024-11-05T09:36:15.610Z] 4099.70 IOPS, 16.01 MiB/s 00:13:29.652 Latency(us) 00:13:29.652 [2024-11-05T09:36:15.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.652 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:29.652 Verification LBA range: start 0x0 length 0x2000 00:13:29.652 TLSTESTn1 : 10.02 4105.10 16.04 0.00 0.00 31121.84 6404.65 33602.09 00:13:29.652 [2024-11-05T09:36:15.610Z] =================================================================================================================== 00:13:29.652 [2024-11-05T09:36:15.610Z] Total : 4105.10 16.04 0.00 0.00 31121.84 6404.65 33602.09 00:13:29.652 { 00:13:29.652 "results": [ 00:13:29.652 { 00:13:29.652 "job": "TLSTESTn1", 00:13:29.652 "core_mask": "0x4", 00:13:29.652 "workload": "verify", 00:13:29.652 "status": "finished", 00:13:29.652 "verify_range": { 00:13:29.652 "start": 0, 00:13:29.652 "length": 8192 00:13:29.652 }, 00:13:29.652 "queue_depth": 128, 00:13:29.652 "io_size": 4096, 00:13:29.652 "runtime": 10.017306, 00:13:29.652 "iops": 4105.095721344641, 00:13:29.652 "mibps": 16.035530161502503, 00:13:29.652 "io_failed": 0, 00:13:29.652 "io_timeout": 0, 00:13:29.652 "avg_latency_us": 31121.83797215381, 00:13:29.653 "min_latency_us": 6404.654545454546, 00:13:29.653 
"max_latency_us": 33602.09454545454 00:13:29.653 } 00:13:29.653 ], 00:13:29.653 "core_count": 1 00:13:29.653 } 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 71538 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71538 ']' 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71538 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71538 00:13:29.653 killing process with pid 71538 00:13:29.653 Received shutdown signal, test time was about 10.000000 seconds 00:13:29.653 00:13:29.653 Latency(us) 00:13:29.653 [2024-11-05T09:36:15.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:29.653 [2024-11-05T09:36:15.611Z] =================================================================================================================== 00:13:29.653 [2024-11-05T09:36:15.611Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71538' 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71538 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71538 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Tb7u5KQtIo 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tb7u5KQtIo 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tb7u5KQtIo 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:13:29.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Tb7u5KQtIo 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Tb7u5KQtIo 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=71673 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 71673 /var/tmp/bdevperf.sock 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71673 ']' 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.653 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:29.653 [2024-11-05 09:36:15.607406] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
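This second bdevperf run is expected to fail: tls.sh@171 loosened the key file to 0666, and SPDK's keyring refuses PSK files that group or others can access (the error below reports mode 0100666). A rough Python equivalent of that check, assuming it simply masks the group/other permission bits:

import os
import stat

def keyring_file_check_path(path):
    # Reject key files accessible to anyone but the owner (0600 is fine,
    # 0666 is not) -- an approximation of SPDK's keyring check.
    mode = os.stat(path).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise PermissionError(
            "Invalid permissions for key file '%s': %o" % (path, mode))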
00:13:29.653 [2024-11-05 09:36:15.607506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71673 ] 00:13:29.912 [2024-11-05 09:36:15.755057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.912 [2024-11-05 09:36:15.788179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.912 [2024-11-05 09:36:15.821404] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:30.171 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:30.171 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:30.171 09:36:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:30.171 [2024-11-05 09:36:16.113747] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Tb7u5KQtIo': 0100666 00:13:30.171 [2024-11-05 09:36:16.113810] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:30.171 request: 00:13:30.171 { 00:13:30.171 "name": "key0", 00:13:30.171 "path": "/tmp/tmp.Tb7u5KQtIo", 00:13:30.171 "method": "keyring_file_add_key", 00:13:30.171 "req_id": 1 00:13:30.171 } 00:13:30.171 Got JSON-RPC error response 00:13:30.171 response: 00:13:30.171 { 00:13:30.171 "code": -1, 00:13:30.171 "message": "Operation not permitted" 00:13:30.171 } 00:13:30.429 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:30.689 [2024-11-05 09:36:16.417936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:30.689 [2024-11-05 09:36:16.418030] bdev_nvme.c:6620:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:13:30.689 request: 00:13:30.689 { 00:13:30.689 "name": "TLSTEST", 00:13:30.689 "trtype": "tcp", 00:13:30.689 "traddr": "10.0.0.3", 00:13:30.689 "adrfam": "ipv4", 00:13:30.689 "trsvcid": "4420", 00:13:30.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:30.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:30.689 "prchk_reftag": false, 00:13:30.689 "prchk_guard": false, 00:13:30.689 "hdgst": false, 00:13:30.689 "ddgst": false, 00:13:30.689 "psk": "key0", 00:13:30.689 "allow_unrecognized_csi": false, 00:13:30.689 "method": "bdev_nvme_attach_controller", 00:13:30.689 "req_id": 1 00:13:30.689 } 00:13:30.689 Got JSON-RPC error response 00:13:30.689 response: 00:13:30.689 { 00:13:30.689 "code": -126, 00:13:30.689 "message": "Required key not available" 00:13:30.689 } 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 71673 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71673 ']' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71673 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71673 00:13:30.689 killing process with pid 71673 00:13:30.689 Received shutdown signal, test time was about 10.000000 seconds 00:13:30.689 00:13:30.689 Latency(us) 00:13:30.689 [2024-11-05T09:36:16.647Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:30.689 [2024-11-05T09:36:16.647Z] =================================================================================================================== 00:13:30.689 [2024-11-05T09:36:16.647Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71673' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71673 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71673 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 71484 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71484 ']' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71484 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71484 00:13:30.689 killing process with pid 71484 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71484' 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71484 00:13:30.689 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71484 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71699 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71699 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71699 ']' 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:30.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:30.948 09:36:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:30.948 [2024-11-05 09:36:16.859516] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:30.948 [2024-11-05 09:36:16.859632] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.206 [2024-11-05 09:36:17.016500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.206 [2024-11-05 09:36:17.047474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.206 [2024-11-05 09:36:17.047541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.207 [2024-11-05 09:36:17.047569] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:31.207 [2024-11-05 09:36:17.047577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:31.207 [2024-11-05 09:36:17.047584] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:31.207 [2024-11-05 09:36:17.047905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:31.207 [2024-11-05 09:36:17.077380] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Tb7u5KQtIo 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Tb7u5KQtIo 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.Tb7u5KQtIo 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tb7u5KQtIo 00:13:32.141 09:36:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:32.400 [2024-11-05 09:36:18.148589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.400 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:32.658 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:32.916 [2024-11-05 09:36:18.736772] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:32.917 [2024-11-05 09:36:18.737088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:32.917 09:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:33.175 malloc0 00:13:33.175 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:33.433 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:34.001 
[2024-11-05 09:36:19.672224] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Tb7u5KQtIo': 0100666 00:13:34.001 [2024-11-05 09:36:19.672266] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:13:34.001 request: 00:13:34.001 { 00:13:34.001 "name": "key0", 00:13:34.001 "path": "/tmp/tmp.Tb7u5KQtIo", 00:13:34.001 "method": "keyring_file_add_key", 00:13:34.001 "req_id": 1 00:13:34.001 } 00:13:34.001 Got JSON-RPC error response 00:13:34.001 response: 00:13:34.001 { 00:13:34.001 "code": -1, 00:13:34.001 "message": "Operation not permitted" 00:13:34.001 } 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:34.001 [2024-11-05 09:36:19.916331] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:13:34.001 [2024-11-05 09:36:19.916433] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:13:34.001 request: 00:13:34.001 { 00:13:34.001 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:34.001 "host": "nqn.2016-06.io.spdk:host1", 00:13:34.001 "psk": "key0", 00:13:34.001 "method": "nvmf_subsystem_add_host", 00:13:34.001 "req_id": 1 00:13:34.001 } 00:13:34.001 Got JSON-RPC error response 00:13:34.001 response: 00:13:34.001 { 00:13:34.001 "code": -32603, 00:13:34.001 "message": "Internal error" 00:13:34.001 } 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 71699 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71699 ']' 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71699 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:34.001 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71699 00:13:34.260 killing process with pid 71699 00:13:34.260 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:34.260 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:34.260 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71699' 00:13:34.260 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71699 00:13:34.260 09:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71699 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Tb7u5KQtIo 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71774 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71774 00:13:34.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71774 ']' 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:34.260 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.260 [2024-11-05 09:36:20.181494] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:34.260 [2024-11-05 09:36:20.181597] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.519 [2024-11-05 09:36:20.330792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.519 [2024-11-05 09:36:20.360181] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:34.519 [2024-11-05 09:36:20.360250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:34.519 [2024-11-05 09:36:20.360276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:34.519 [2024-11-05 09:36:20.360284] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:34.519 [2024-11-05 09:36:20.360291] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
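The pattern in the failing setup above is worth spelling out: because keyring_file_add_key is rejected (code -1, Operation not permitted), key0 is never created, so the later nvmf_subsystem_add_host can only fail with -32603 (Internal error) rather than a key-specific code. Reusing the rpc() helper sketched earlier, a hypothetical driver for the same sequence:

import os

resp = rpc("/var/tmp/spdk.sock", "keyring_file_add_key",
           {"name": "key0", "path": "/tmp/tmp.Tb7u5KQtIo"})
assert resp["error"]["code"] == -1       # file is 0666: Operation not permitted

resp = rpc("/var/tmp/spdk.sock", "nvmf_subsystem_add_host",
           {"nqn": "nqn.2016-06.io.spdk:cnode1",
            "host": "nqn.2016-06.io.spdk:host1", "psk": "key0"})
assert resp["error"]["code"] == -32603   # key0 was never added: Internal error

os.chmod("/tmp/tmp.Tb7u5KQtIo", 0o600)   # tls.sh@182: restore for the next run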
00:13:34.519 [2024-11-05 09:36:20.360623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:34.519 [2024-11-05 09:36:20.389368] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:34.519 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:34.519 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:34.519 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:34.519 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:34.519 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:34.779 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.779 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Tb7u5KQtIo 00:13:34.779 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tb7u5KQtIo 00:13:34.779 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:35.037 [2024-11-05 09:36:20.759258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:35.037 09:36:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:35.295 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:35.557 [2024-11-05 09:36:21.287339] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:35.557 [2024-11-05 09:36:21.287574] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:35.557 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:35.816 malloc0 00:13:35.816 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:36.075 09:36:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:36.333 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=71822 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 71822 /var/tmp/bdevperf.sock 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71822 ']' 
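After the attach succeeds, the test snapshots both the target and bdevperf configuration with save_config (the two large JSON blobs that follow) so the target can later be restarted from the dump via -c /dev/fd/62 (tls.sh@205). A small sketch for sanity-checking the target dump (tgtconf) before replaying it, based on the structure visible below:

import json

def check_tls_config(conf_text):
    # A save_config dump is {"subsystems": [{"subsystem": ..., "config":
    # [{"method": ..., "params": ...}, ...]}, ...]}; make sure the keyring
    # entry and the TLS-enabled listener survived the round trip.
    subsystems = {s["subsystem"]: s["config"]
                  for s in json.loads(conf_text)["subsystems"]}
    keys = [c["params"]["name"] for c in subsystems["keyring"]
            if c["method"] == "keyring_file_add_key"]
    assert "key0" in keys
    listeners = [c["params"] for c in subsystems.get("nvmf", [])
                 if c["method"] == "nvmf_subsystem_add_listener"]
    assert any(l.get("secure_channel") for l in listeners)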
00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:36.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:36.592 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:36.592 [2024-11-05 09:36:22.444063] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:36.592 [2024-11-05 09:36:22.444200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71822 ] 00:13:36.850 [2024-11-05 09:36:22.588614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.850 [2024-11-05 09:36:22.622506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:36.850 [2024-11-05 09:36:22.652687] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:36.850 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:36.850 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:36.850 09:36:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:37.109 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:13:37.676 [2024-11-05 09:36:23.331463] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:37.676 TLSTESTn1 00:13:37.676 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:13:37.935 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:13:37.935 "subsystems": [ 00:13:37.935 { 00:13:37.935 "subsystem": "keyring", 00:13:37.935 "config": [ 00:13:37.935 { 00:13:37.935 "method": "keyring_file_add_key", 00:13:37.935 "params": { 00:13:37.935 "name": "key0", 00:13:37.935 "path": "/tmp/tmp.Tb7u5KQtIo" 00:13:37.935 } 00:13:37.935 } 00:13:37.935 ] 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "subsystem": "iobuf", 00:13:37.935 "config": [ 00:13:37.935 { 00:13:37.935 "method": "iobuf_set_options", 00:13:37.935 "params": { 00:13:37.935 "small_pool_count": 8192, 00:13:37.935 "large_pool_count": 1024, 00:13:37.935 "small_bufsize": 8192, 00:13:37.935 "large_bufsize": 135168, 00:13:37.935 "enable_numa": false 00:13:37.935 } 00:13:37.935 } 00:13:37.935 ] 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "subsystem": "sock", 00:13:37.935 "config": [ 00:13:37.935 { 00:13:37.935 "method": "sock_set_default_impl", 00:13:37.935 "params": { 
00:13:37.935 "impl_name": "uring" 00:13:37.935 } 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "method": "sock_impl_set_options", 00:13:37.935 "params": { 00:13:37.935 "impl_name": "ssl", 00:13:37.935 "recv_buf_size": 4096, 00:13:37.935 "send_buf_size": 4096, 00:13:37.935 "enable_recv_pipe": true, 00:13:37.935 "enable_quickack": false, 00:13:37.935 "enable_placement_id": 0, 00:13:37.935 "enable_zerocopy_send_server": true, 00:13:37.935 "enable_zerocopy_send_client": false, 00:13:37.935 "zerocopy_threshold": 0, 00:13:37.935 "tls_version": 0, 00:13:37.935 "enable_ktls": false 00:13:37.935 } 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "method": "sock_impl_set_options", 00:13:37.935 "params": { 00:13:37.935 "impl_name": "posix", 00:13:37.935 "recv_buf_size": 2097152, 00:13:37.935 "send_buf_size": 2097152, 00:13:37.935 "enable_recv_pipe": true, 00:13:37.935 "enable_quickack": false, 00:13:37.935 "enable_placement_id": 0, 00:13:37.935 "enable_zerocopy_send_server": true, 00:13:37.935 "enable_zerocopy_send_client": false, 00:13:37.935 "zerocopy_threshold": 0, 00:13:37.935 "tls_version": 0, 00:13:37.935 "enable_ktls": false 00:13:37.935 } 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "method": "sock_impl_set_options", 00:13:37.935 "params": { 00:13:37.935 "impl_name": "uring", 00:13:37.935 "recv_buf_size": 2097152, 00:13:37.935 "send_buf_size": 2097152, 00:13:37.935 "enable_recv_pipe": true, 00:13:37.935 "enable_quickack": false, 00:13:37.935 "enable_placement_id": 0, 00:13:37.935 "enable_zerocopy_send_server": false, 00:13:37.935 "enable_zerocopy_send_client": false, 00:13:37.935 "zerocopy_threshold": 0, 00:13:37.935 "tls_version": 0, 00:13:37.935 "enable_ktls": false 00:13:37.935 } 00:13:37.935 } 00:13:37.935 ] 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "subsystem": "vmd", 00:13:37.935 "config": [] 00:13:37.935 }, 00:13:37.935 { 00:13:37.935 "subsystem": "accel", 00:13:37.935 "config": [ 00:13:37.935 { 00:13:37.935 "method": "accel_set_options", 00:13:37.935 "params": { 00:13:37.936 "small_cache_size": 128, 00:13:37.936 "large_cache_size": 16, 00:13:37.936 "task_count": 2048, 00:13:37.936 "sequence_count": 2048, 00:13:37.936 "buf_count": 2048 00:13:37.936 } 00:13:37.936 } 00:13:37.936 ] 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "subsystem": "bdev", 00:13:37.936 "config": [ 00:13:37.936 { 00:13:37.936 "method": "bdev_set_options", 00:13:37.936 "params": { 00:13:37.936 "bdev_io_pool_size": 65535, 00:13:37.936 "bdev_io_cache_size": 256, 00:13:37.936 "bdev_auto_examine": true, 00:13:37.936 "iobuf_small_cache_size": 128, 00:13:37.936 "iobuf_large_cache_size": 16 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "bdev_raid_set_options", 00:13:37.936 "params": { 00:13:37.936 "process_window_size_kb": 1024, 00:13:37.936 "process_max_bandwidth_mb_sec": 0 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "bdev_iscsi_set_options", 00:13:37.936 "params": { 00:13:37.936 "timeout_sec": 30 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "bdev_nvme_set_options", 00:13:37.936 "params": { 00:13:37.936 "action_on_timeout": "none", 00:13:37.936 "timeout_us": 0, 00:13:37.936 "timeout_admin_us": 0, 00:13:37.936 "keep_alive_timeout_ms": 10000, 00:13:37.936 "arbitration_burst": 0, 00:13:37.936 "low_priority_weight": 0, 00:13:37.936 "medium_priority_weight": 0, 00:13:37.936 "high_priority_weight": 0, 00:13:37.936 "nvme_adminq_poll_period_us": 10000, 00:13:37.936 "nvme_ioq_poll_period_us": 0, 00:13:37.936 "io_queue_requests": 0, 00:13:37.936 "delay_cmd_submit": 
true, 00:13:37.936 "transport_retry_count": 4, 00:13:37.936 "bdev_retry_count": 3, 00:13:37.936 "transport_ack_timeout": 0, 00:13:37.936 "ctrlr_loss_timeout_sec": 0, 00:13:37.936 "reconnect_delay_sec": 0, 00:13:37.936 "fast_io_fail_timeout_sec": 0, 00:13:37.936 "disable_auto_failback": false, 00:13:37.936 "generate_uuids": false, 00:13:37.936 "transport_tos": 0, 00:13:37.936 "nvme_error_stat": false, 00:13:37.936 "rdma_srq_size": 0, 00:13:37.936 "io_path_stat": false, 00:13:37.936 "allow_accel_sequence": false, 00:13:37.936 "rdma_max_cq_size": 0, 00:13:37.936 "rdma_cm_event_timeout_ms": 0, 00:13:37.936 "dhchap_digests": [ 00:13:37.936 "sha256", 00:13:37.936 "sha384", 00:13:37.936 "sha512" 00:13:37.936 ], 00:13:37.936 "dhchap_dhgroups": [ 00:13:37.936 "null", 00:13:37.936 "ffdhe2048", 00:13:37.936 "ffdhe3072", 00:13:37.936 "ffdhe4096", 00:13:37.936 "ffdhe6144", 00:13:37.936 "ffdhe8192" 00:13:37.936 ] 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "bdev_nvme_set_hotplug", 00:13:37.936 "params": { 00:13:37.936 "period_us": 100000, 00:13:37.936 "enable": false 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "bdev_malloc_create", 00:13:37.936 "params": { 00:13:37.936 "name": "malloc0", 00:13:37.936 "num_blocks": 8192, 00:13:37.936 "block_size": 4096, 00:13:37.936 "physical_block_size": 4096, 00:13:37.936 "uuid": "34432e40-f61a-4f59-b7d4-c9df05b29df1", 00:13:37.936 "optimal_io_boundary": 0, 00:13:37.936 "md_size": 0, 00:13:37.936 "dif_type": 0, 00:13:37.936 "dif_is_head_of_md": false, 00:13:37.936 "dif_pi_format": 0 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "bdev_wait_for_examine" 00:13:37.936 } 00:13:37.936 ] 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "subsystem": "nbd", 00:13:37.936 "config": [] 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "subsystem": "scheduler", 00:13:37.936 "config": [ 00:13:37.936 { 00:13:37.936 "method": "framework_set_scheduler", 00:13:37.936 "params": { 00:13:37.936 "name": "static" 00:13:37.936 } 00:13:37.936 } 00:13:37.936 ] 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "subsystem": "nvmf", 00:13:37.936 "config": [ 00:13:37.936 { 00:13:37.936 "method": "nvmf_set_config", 00:13:37.936 "params": { 00:13:37.936 "discovery_filter": "match_any", 00:13:37.936 "admin_cmd_passthru": { 00:13:37.936 "identify_ctrlr": false 00:13:37.936 }, 00:13:37.936 "dhchap_digests": [ 00:13:37.936 "sha256", 00:13:37.936 "sha384", 00:13:37.936 "sha512" 00:13:37.936 ], 00:13:37.936 "dhchap_dhgroups": [ 00:13:37.936 "null", 00:13:37.936 "ffdhe2048", 00:13:37.936 "ffdhe3072", 00:13:37.936 "ffdhe4096", 00:13:37.936 "ffdhe6144", 00:13:37.936 "ffdhe8192" 00:13:37.936 ] 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_set_max_subsystems", 00:13:37.936 "params": { 00:13:37.936 "max_subsystems": 1024 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_set_crdt", 00:13:37.936 "params": { 00:13:37.936 "crdt1": 0, 00:13:37.936 "crdt2": 0, 00:13:37.936 "crdt3": 0 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_create_transport", 00:13:37.936 "params": { 00:13:37.936 "trtype": "TCP", 00:13:37.936 "max_queue_depth": 128, 00:13:37.936 "max_io_qpairs_per_ctrlr": 127, 00:13:37.936 "in_capsule_data_size": 4096, 00:13:37.936 "max_io_size": 131072, 00:13:37.936 "io_unit_size": 131072, 00:13:37.936 "max_aq_depth": 128, 00:13:37.936 "num_shared_buffers": 511, 00:13:37.936 "buf_cache_size": 4294967295, 00:13:37.936 "dif_insert_or_strip": false, 00:13:37.936 "zcopy": false, 
00:13:37.936 "c2h_success": false, 00:13:37.936 "sock_priority": 0, 00:13:37.936 "abort_timeout_sec": 1, 00:13:37.936 "ack_timeout": 0, 00:13:37.936 "data_wr_pool_size": 0 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_create_subsystem", 00:13:37.936 "params": { 00:13:37.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.936 "allow_any_host": false, 00:13:37.936 "serial_number": "SPDK00000000000001", 00:13:37.936 "model_number": "SPDK bdev Controller", 00:13:37.936 "max_namespaces": 10, 00:13:37.936 "min_cntlid": 1, 00:13:37.936 "max_cntlid": 65519, 00:13:37.936 "ana_reporting": false 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_subsystem_add_host", 00:13:37.936 "params": { 00:13:37.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.936 "host": "nqn.2016-06.io.spdk:host1", 00:13:37.936 "psk": "key0" 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_subsystem_add_ns", 00:13:37.936 "params": { 00:13:37.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.936 "namespace": { 00:13:37.936 "nsid": 1, 00:13:37.936 "bdev_name": "malloc0", 00:13:37.936 "nguid": "34432E40F61A4F59B7D4C9DF05B29DF1", 00:13:37.936 "uuid": "34432e40-f61a-4f59-b7d4-c9df05b29df1", 00:13:37.936 "no_auto_visible": false 00:13:37.936 } 00:13:37.936 } 00:13:37.936 }, 00:13:37.936 { 00:13:37.936 "method": "nvmf_subsystem_add_listener", 00:13:37.936 "params": { 00:13:37.936 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:37.936 "listen_address": { 00:13:37.936 "trtype": "TCP", 00:13:37.936 "adrfam": "IPv4", 00:13:37.936 "traddr": "10.0.0.3", 00:13:37.936 "trsvcid": "4420" 00:13:37.936 }, 00:13:37.936 "secure_channel": true 00:13:37.936 } 00:13:37.936 } 00:13:37.936 ] 00:13:37.936 } 00:13:37.936 ] 00:13:37.936 }' 00:13:37.936 09:36:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:13:38.195 "subsystems": [ 00:13:38.195 { 00:13:38.195 "subsystem": "keyring", 00:13:38.195 "config": [ 00:13:38.195 { 00:13:38.195 "method": "keyring_file_add_key", 00:13:38.195 "params": { 00:13:38.195 "name": "key0", 00:13:38.195 "path": "/tmp/tmp.Tb7u5KQtIo" 00:13:38.195 } 00:13:38.195 } 00:13:38.195 ] 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "subsystem": "iobuf", 00:13:38.195 "config": [ 00:13:38.195 { 00:13:38.195 "method": "iobuf_set_options", 00:13:38.195 "params": { 00:13:38.195 "small_pool_count": 8192, 00:13:38.195 "large_pool_count": 1024, 00:13:38.195 "small_bufsize": 8192, 00:13:38.195 "large_bufsize": 135168, 00:13:38.195 "enable_numa": false 00:13:38.195 } 00:13:38.195 } 00:13:38.195 ] 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "subsystem": "sock", 00:13:38.195 "config": [ 00:13:38.195 { 00:13:38.195 "method": "sock_set_default_impl", 00:13:38.195 "params": { 00:13:38.195 "impl_name": "uring" 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "sock_impl_set_options", 00:13:38.195 "params": { 00:13:38.195 "impl_name": "ssl", 00:13:38.195 "recv_buf_size": 4096, 00:13:38.195 "send_buf_size": 4096, 00:13:38.195 "enable_recv_pipe": true, 00:13:38.195 "enable_quickack": false, 00:13:38.195 "enable_placement_id": 0, 00:13:38.195 "enable_zerocopy_send_server": true, 00:13:38.195 "enable_zerocopy_send_client": false, 00:13:38.195 "zerocopy_threshold": 0, 00:13:38.195 "tls_version": 0, 00:13:38.195 "enable_ktls": false 00:13:38.195 } 00:13:38.195 }, 
00:13:38.195 { 00:13:38.195 "method": "sock_impl_set_options", 00:13:38.195 "params": { 00:13:38.195 "impl_name": "posix", 00:13:38.195 "recv_buf_size": 2097152, 00:13:38.195 "send_buf_size": 2097152, 00:13:38.195 "enable_recv_pipe": true, 00:13:38.195 "enable_quickack": false, 00:13:38.195 "enable_placement_id": 0, 00:13:38.195 "enable_zerocopy_send_server": true, 00:13:38.195 "enable_zerocopy_send_client": false, 00:13:38.195 "zerocopy_threshold": 0, 00:13:38.195 "tls_version": 0, 00:13:38.195 "enable_ktls": false 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "sock_impl_set_options", 00:13:38.195 "params": { 00:13:38.195 "impl_name": "uring", 00:13:38.195 "recv_buf_size": 2097152, 00:13:38.195 "send_buf_size": 2097152, 00:13:38.195 "enable_recv_pipe": true, 00:13:38.195 "enable_quickack": false, 00:13:38.195 "enable_placement_id": 0, 00:13:38.195 "enable_zerocopy_send_server": false, 00:13:38.195 "enable_zerocopy_send_client": false, 00:13:38.195 "zerocopy_threshold": 0, 00:13:38.195 "tls_version": 0, 00:13:38.195 "enable_ktls": false 00:13:38.195 } 00:13:38.195 } 00:13:38.195 ] 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "subsystem": "vmd", 00:13:38.195 "config": [] 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "subsystem": "accel", 00:13:38.195 "config": [ 00:13:38.195 { 00:13:38.195 "method": "accel_set_options", 00:13:38.195 "params": { 00:13:38.195 "small_cache_size": 128, 00:13:38.195 "large_cache_size": 16, 00:13:38.195 "task_count": 2048, 00:13:38.195 "sequence_count": 2048, 00:13:38.195 "buf_count": 2048 00:13:38.195 } 00:13:38.195 } 00:13:38.195 ] 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "subsystem": "bdev", 00:13:38.195 "config": [ 00:13:38.195 { 00:13:38.195 "method": "bdev_set_options", 00:13:38.195 "params": { 00:13:38.195 "bdev_io_pool_size": 65535, 00:13:38.195 "bdev_io_cache_size": 256, 00:13:38.195 "bdev_auto_examine": true, 00:13:38.195 "iobuf_small_cache_size": 128, 00:13:38.195 "iobuf_large_cache_size": 16 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "bdev_raid_set_options", 00:13:38.195 "params": { 00:13:38.195 "process_window_size_kb": 1024, 00:13:38.195 "process_max_bandwidth_mb_sec": 0 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "bdev_iscsi_set_options", 00:13:38.195 "params": { 00:13:38.195 "timeout_sec": 30 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "bdev_nvme_set_options", 00:13:38.195 "params": { 00:13:38.195 "action_on_timeout": "none", 00:13:38.195 "timeout_us": 0, 00:13:38.195 "timeout_admin_us": 0, 00:13:38.195 "keep_alive_timeout_ms": 10000, 00:13:38.195 "arbitration_burst": 0, 00:13:38.195 "low_priority_weight": 0, 00:13:38.195 "medium_priority_weight": 0, 00:13:38.195 "high_priority_weight": 0, 00:13:38.195 "nvme_adminq_poll_period_us": 10000, 00:13:38.195 "nvme_ioq_poll_period_us": 0, 00:13:38.195 "io_queue_requests": 512, 00:13:38.195 "delay_cmd_submit": true, 00:13:38.195 "transport_retry_count": 4, 00:13:38.195 "bdev_retry_count": 3, 00:13:38.195 "transport_ack_timeout": 0, 00:13:38.195 "ctrlr_loss_timeout_sec": 0, 00:13:38.195 "reconnect_delay_sec": 0, 00:13:38.195 "fast_io_fail_timeout_sec": 0, 00:13:38.195 "disable_auto_failback": false, 00:13:38.195 "generate_uuids": false, 00:13:38.195 "transport_tos": 0, 00:13:38.195 "nvme_error_stat": false, 00:13:38.195 "rdma_srq_size": 0, 00:13:38.195 "io_path_stat": false, 00:13:38.195 "allow_accel_sequence": false, 00:13:38.195 "rdma_max_cq_size": 0, 00:13:38.195 "rdma_cm_event_timeout_ms": 0, 00:13:38.195 
"dhchap_digests": [ 00:13:38.195 "sha256", 00:13:38.195 "sha384", 00:13:38.195 "sha512" 00:13:38.195 ], 00:13:38.195 "dhchap_dhgroups": [ 00:13:38.195 "null", 00:13:38.195 "ffdhe2048", 00:13:38.195 "ffdhe3072", 00:13:38.195 "ffdhe4096", 00:13:38.195 "ffdhe6144", 00:13:38.195 "ffdhe8192" 00:13:38.195 ] 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "bdev_nvme_attach_controller", 00:13:38.195 "params": { 00:13:38.195 "name": "TLSTEST", 00:13:38.195 "trtype": "TCP", 00:13:38.195 "adrfam": "IPv4", 00:13:38.195 "traddr": "10.0.0.3", 00:13:38.195 "trsvcid": "4420", 00:13:38.195 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.195 "prchk_reftag": false, 00:13:38.195 "prchk_guard": false, 00:13:38.195 "ctrlr_loss_timeout_sec": 0, 00:13:38.195 "reconnect_delay_sec": 0, 00:13:38.195 "fast_io_fail_timeout_sec": 0, 00:13:38.195 "psk": "key0", 00:13:38.195 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:38.195 "hdgst": false, 00:13:38.195 "ddgst": false, 00:13:38.195 "multipath": "multipath" 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "bdev_nvme_set_hotplug", 00:13:38.195 "params": { 00:13:38.195 "period_us": 100000, 00:13:38.195 "enable": false 00:13:38.195 } 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "method": "bdev_wait_for_examine" 00:13:38.195 } 00:13:38.195 ] 00:13:38.195 }, 00:13:38.195 { 00:13:38.195 "subsystem": "nbd", 00:13:38.195 "config": [] 00:13:38.195 } 00:13:38.195 ] 00:13:38.195 }' 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 71822 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71822 ']' 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71822 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71822 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:38.195 killing process with pid 71822 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71822' 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71822 00:13:38.195 Received shutdown signal, test time was about 10.000000 seconds 00:13:38.195 00:13:38.195 Latency(us) 00:13:38.195 [2024-11-05T09:36:24.153Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:38.195 [2024-11-05T09:36:24.153Z] =================================================================================================================== 00:13:38.195 [2024-11-05T09:36:24.153Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:38.195 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71822 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 71774 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71774 ']' 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # 
kill -0 71774 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71774 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:38.454 killing process with pid 71774 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71774' 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71774 00:13:38.454 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71774 00:13:38.713 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:13:38.713 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:38.713 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:38.713 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.713 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:13:38.713 "subsystems": [ 00:13:38.713 { 00:13:38.713 "subsystem": "keyring", 00:13:38.713 "config": [ 00:13:38.713 { 00:13:38.713 "method": "keyring_file_add_key", 00:13:38.713 "params": { 00:13:38.713 "name": "key0", 00:13:38.713 "path": "/tmp/tmp.Tb7u5KQtIo" 00:13:38.713 } 00:13:38.713 } 00:13:38.713 ] 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "subsystem": "iobuf", 00:13:38.713 "config": [ 00:13:38.713 { 00:13:38.713 "method": "iobuf_set_options", 00:13:38.713 "params": { 00:13:38.713 "small_pool_count": 8192, 00:13:38.713 "large_pool_count": 1024, 00:13:38.713 "small_bufsize": 8192, 00:13:38.713 "large_bufsize": 135168, 00:13:38.713 "enable_numa": false 00:13:38.713 } 00:13:38.713 } 00:13:38.713 ] 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "subsystem": "sock", 00:13:38.713 "config": [ 00:13:38.713 { 00:13:38.713 "method": "sock_set_default_impl", 00:13:38.713 "params": { 00:13:38.713 "impl_name": "uring" 00:13:38.713 } 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "method": "sock_impl_set_options", 00:13:38.713 "params": { 00:13:38.713 "impl_name": "ssl", 00:13:38.713 "recv_buf_size": 4096, 00:13:38.713 "send_buf_size": 4096, 00:13:38.713 "enable_recv_pipe": true, 00:13:38.713 "enable_quickack": false, 00:13:38.713 "enable_placement_id": 0, 00:13:38.713 "enable_zerocopy_send_server": true, 00:13:38.713 "enable_zerocopy_send_client": false, 00:13:38.713 "zerocopy_threshold": 0, 00:13:38.713 "tls_version": 0, 00:13:38.713 "enable_ktls": false 00:13:38.713 } 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "method": "sock_impl_set_options", 00:13:38.713 "params": { 00:13:38.713 "impl_name": "posix", 00:13:38.713 "recv_buf_size": 2097152, 00:13:38.713 "send_buf_size": 2097152, 00:13:38.713 "enable_recv_pipe": true, 00:13:38.713 "enable_quickack": false, 00:13:38.713 "enable_placement_id": 0, 00:13:38.713 "enable_zerocopy_send_server": true, 00:13:38.713 "enable_zerocopy_send_client": false, 00:13:38.713 "zerocopy_threshold": 0, 00:13:38.713 "tls_version": 0, 00:13:38.713 "enable_ktls": false 
00:13:38.713 } 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "method": "sock_impl_set_options", 00:13:38.713 "params": { 00:13:38.713 "impl_name": "uring", 00:13:38.713 "recv_buf_size": 2097152, 00:13:38.713 "send_buf_size": 2097152, 00:13:38.713 "enable_recv_pipe": true, 00:13:38.713 "enable_quickack": false, 00:13:38.713 "enable_placement_id": 0, 00:13:38.713 "enable_zerocopy_send_server": false, 00:13:38.713 "enable_zerocopy_send_client": false, 00:13:38.713 "zerocopy_threshold": 0, 00:13:38.713 "tls_version": 0, 00:13:38.713 "enable_ktls": false 00:13:38.713 } 00:13:38.713 } 00:13:38.713 ] 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "subsystem": "vmd", 00:13:38.713 "config": [] 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "subsystem": "accel", 00:13:38.713 "config": [ 00:13:38.713 { 00:13:38.713 "method": "accel_set_options", 00:13:38.713 "params": { 00:13:38.713 "small_cache_size": 128, 00:13:38.713 "large_cache_size": 16, 00:13:38.713 "task_count": 2048, 00:13:38.713 "sequence_count": 2048, 00:13:38.713 "buf_count": 2048 00:13:38.713 } 00:13:38.713 } 00:13:38.713 ] 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "subsystem": "bdev", 00:13:38.713 "config": [ 00:13:38.713 { 00:13:38.713 "method": "bdev_set_options", 00:13:38.713 "params": { 00:13:38.713 "bdev_io_pool_size": 65535, 00:13:38.713 "bdev_io_cache_size": 256, 00:13:38.713 "bdev_auto_examine": true, 00:13:38.713 "iobuf_small_cache_size": 128, 00:13:38.713 "iobuf_large_cache_size": 16 00:13:38.713 } 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "method": "bdev_raid_set_options", 00:13:38.713 "params": { 00:13:38.713 "process_window_size_kb": 1024, 00:13:38.713 "process_max_bandwidth_mb_sec": 0 00:13:38.713 } 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "method": "bdev_iscsi_set_options", 00:13:38.713 "params": { 00:13:38.713 "timeout_sec": 30 00:13:38.713 } 00:13:38.713 }, 00:13:38.713 { 00:13:38.713 "method": "bdev_nvme_set_options", 00:13:38.713 "params": { 00:13:38.713 "action_on_timeout": "none", 00:13:38.713 "timeout_us": 0, 00:13:38.713 "timeout_admin_us": 0, 00:13:38.713 "keep_alive_timeout_ms": 10000, 00:13:38.713 "arbitration_burst": 0, 00:13:38.713 "low_priority_weight": 0, 00:13:38.713 "medium_priority_weight": 0, 00:13:38.713 "high_priority_weight": 0, 00:13:38.713 "nvme_adminq_poll_period_us": 10000, 00:13:38.713 "nvme_ioq_poll_period_us": 0, 00:13:38.713 "io_queue_requests": 0, 00:13:38.713 "delay_cmd_submit": true, 00:13:38.713 "transport_retry_count": 4, 00:13:38.713 "bdev_retry_count": 3, 00:13:38.714 "transport_ack_timeout": 0, 00:13:38.714 "ctrlr_loss_timeout_sec": 0, 00:13:38.714 "reconnect_delay_sec": 0, 00:13:38.714 "fast_io_fail_timeout_sec": 0, 00:13:38.714 "disable_auto_failback": false, 00:13:38.714 "generate_uuids": false, 00:13:38.714 "transport_tos": 0, 00:13:38.714 "nvme_error_stat": false, 00:13:38.714 "rdma_srq_size": 0, 00:13:38.714 "io_path_stat": false, 00:13:38.714 "allow_accel_sequence": false, 00:13:38.714 "rdma_max_cq_size": 0, 00:13:38.714 "rdma_cm_event_timeout_ms": 0, 00:13:38.714 "dhchap_digests": [ 00:13:38.714 "sha256", 00:13:38.714 "sha384", 00:13:38.714 "sha512" 00:13:38.714 ], 00:13:38.714 "dhchap_dhgroups": [ 00:13:38.714 "null", 00:13:38.714 "ffdhe2048", 00:13:38.714 "ffdhe3072", 00:13:38.714 "ffdhe4096", 00:13:38.714 "ffdhe6144", 00:13:38.714 "ffdhe8192" 00:13:38.714 ] 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "bdev_nvme_set_hotplug", 00:13:38.714 "params": { 00:13:38.714 "period_us": 100000, 00:13:38.714 "enable": false 00:13:38.714 } 00:13:38.714 }, 
00:13:38.714 { 00:13:38.714 "method": "bdev_malloc_create", 00:13:38.714 "params": { 00:13:38.714 "name": "malloc0", 00:13:38.714 "num_blocks": 8192, 00:13:38.714 "block_size": 4096, 00:13:38.714 "physical_block_size": 4096, 00:13:38.714 "uuid": "34432e40-f61a-4f59-b7d4-c9df05b29df1", 00:13:38.714 "optimal_io_boundary": 0, 00:13:38.714 "md_size": 0, 00:13:38.714 "dif_type": 0, 00:13:38.714 "dif_is_head_of_md": false, 00:13:38.714 "dif_pi_format": 0 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "bdev_wait_for_examine" 00:13:38.714 } 00:13:38.714 ] 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "subsystem": "nbd", 00:13:38.714 "config": [] 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "subsystem": "scheduler", 00:13:38.714 "config": [ 00:13:38.714 { 00:13:38.714 "method": "framework_set_scheduler", 00:13:38.714 "params": { 00:13:38.714 "name": "static" 00:13:38.714 } 00:13:38.714 } 00:13:38.714 ] 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "subsystem": "nvmf", 00:13:38.714 "config": [ 00:13:38.714 { 00:13:38.714 "method": "nvmf_set_config", 00:13:38.714 "params": { 00:13:38.714 "discovery_filter": "match_any", 00:13:38.714 "admin_cmd_passthru": { 00:13:38.714 "identify_ctrlr": false 00:13:38.714 }, 00:13:38.714 "dhchap_digests": [ 00:13:38.714 "sha256", 00:13:38.714 "sha384", 00:13:38.714 "sha512" 00:13:38.714 ], 00:13:38.714 "dhchap_dhgroups": [ 00:13:38.714 "null", 00:13:38.714 "ffdhe2048", 00:13:38.714 "ffdhe3072", 00:13:38.714 "ffdhe4096", 00:13:38.714 "ffdhe6144", 00:13:38.714 "ffdhe8192" 00:13:38.714 ] 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_set_max_subsystems", 00:13:38.714 "params": { 00:13:38.714 "max_subsystems": 1024 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_set_crdt", 00:13:38.714 "params": { 00:13:38.714 "crdt1": 0, 00:13:38.714 "crdt2": 0, 00:13:38.714 "crdt3": 0 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_create_transport", 00:13:38.714 "params": { 00:13:38.714 "trtype": "TCP", 00:13:38.714 "max_queue_depth": 128, 00:13:38.714 "max_io_qpairs_per_ctrlr": 127, 00:13:38.714 "in_capsule_data_size": 4096, 00:13:38.714 "max_io_size": 131072, 00:13:38.714 "io_unit_size": 131072, 00:13:38.714 "max_aq_depth": 128, 00:13:38.714 "num_shared_buffers": 511, 00:13:38.714 "buf_cache_size": 4294967295, 00:13:38.714 "dif_insert_or_strip": false, 00:13:38.714 "zcopy": false, 00:13:38.714 "c2h_success": false, 00:13:38.714 "sock_priority": 0, 00:13:38.714 "abort_timeout_sec": 1, 00:13:38.714 "ack_timeout": 0, 00:13:38.714 "data_wr_pool_size": 0 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_create_subsystem", 00:13:38.714 "params": { 00:13:38.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.714 "allow_any_host": false, 00:13:38.714 "serial_number": "SPDK00000000000001", 00:13:38.714 "model_number": "SPDK bdev Controller", 00:13:38.714 "max_namespaces": 10, 00:13:38.714 "min_cntlid": 1, 00:13:38.714 "max_cntlid": 65519, 00:13:38.714 "ana_reporting": false 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_subsystem_add_host", 00:13:38.714 "params": { 00:13:38.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.714 "host": "nqn.2016-06.io.spdk:host1", 00:13:38.714 "psk": "key0" 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_subsystem_add_ns", 00:13:38.714 "params": { 00:13:38.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.714 "namespace": { 00:13:38.714 "nsid": 1, 00:13:38.714 "bdev_name": "malloc0", 
00:13:38.714 "nguid": "34432E40F61A4F59B7D4C9DF05B29DF1", 00:13:38.714 "uuid": "34432e40-f61a-4f59-b7d4-c9df05b29df1", 00:13:38.714 "no_auto_visible": false 00:13:38.714 } 00:13:38.714 } 00:13:38.714 }, 00:13:38.714 { 00:13:38.714 "method": "nvmf_subsystem_add_listener", 00:13:38.714 "params": { 00:13:38.714 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:38.714 "listen_address": { 00:13:38.714 "trtype": "TCP", 00:13:38.714 "adrfam": "IPv4", 00:13:38.714 "traddr": "10.0.0.3", 00:13:38.714 "trsvcid": "4420" 00:13:38.714 }, 00:13:38.714 "secure_channel": true 00:13:38.714 } 00:13:38.714 } 00:13:38.714 ] 00:13:38.714 } 00:13:38.714 ] 00:13:38.714 }' 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=71864 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 71864 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71864 ']' 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:38.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:38.714 09:36:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:38.714 [2024-11-05 09:36:24.486123] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:38.714 [2024-11-05 09:36:24.486232] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:38.714 [2024-11-05 09:36:24.626579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:38.714 [2024-11-05 09:36:24.658248] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:38.714 [2024-11-05 09:36:24.658306] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:38.714 [2024-11-05 09:36:24.658318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:38.714 [2024-11-05 09:36:24.658327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:38.714 [2024-11-05 09:36:24.658334] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
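The whole target configuration above reaches nvmf_tgt through -c /dev/fd/62: the harness echoes the JSON and the shell hands it to the app on file descriptor 62, so the config never touches disk. One shell idiom that reproduces the pattern, trimmed to the keyring subsystem with the key path from this run (the config actually echoed above also carries the iobuf, sock, bdev, scheduler and nvmf subsystems, and the real target runs inside the nvmf_tgt_ns_spdk netns):

nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "keyring",
      "config": [
        {
          "method": "keyring_file_add_key",
          "params": { "name": "key0", "path": "/tmp/tmp.Tb7u5KQtIo" }
        }
      ]
    }
  ]
}
EOF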
00:13:38.714 [2024-11-05 09:36:24.658727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:38.973 [2024-11-05 09:36:24.800319] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:38.973 [2024-11-05 09:36:24.856258] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:38.973 [2024-11-05 09:36:24.888202] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:38.973 [2024-11-05 09:36:24.888468] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:39.909 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:39.909 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:39.909 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=71896 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 71896 /var/tmp/bdevperf.sock 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 71896 ']' 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:39.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
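bdevperf is started here with -z, which brings it up idle so it can be configured over its own RPC socket before any I/O runs; the -c /dev/fd/63 config that follows supplies the TLS key and attach parameters, and a helper script later triggers the measurement. A sketch of that drive sequence, with the binary and script paths shortened from the absolute ones traced in this run:

# start bdevperf idle on a private RPC socket, config fed over fd 63
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 &
# once /var/tmp/bdevperf.sock answers, kick off the configured job
examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests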
00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:13:39.910 09:36:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:13:39.910 "subsystems": [ 00:13:39.910 { 00:13:39.910 "subsystem": "keyring", 00:13:39.910 "config": [ 00:13:39.910 { 00:13:39.910 "method": "keyring_file_add_key", 00:13:39.910 "params": { 00:13:39.910 "name": "key0", 00:13:39.910 "path": "/tmp/tmp.Tb7u5KQtIo" 00:13:39.910 } 00:13:39.910 } 00:13:39.910 ] 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "subsystem": "iobuf", 00:13:39.910 "config": [ 00:13:39.910 { 00:13:39.910 "method": "iobuf_set_options", 00:13:39.910 "params": { 00:13:39.910 "small_pool_count": 8192, 00:13:39.910 "large_pool_count": 1024, 00:13:39.910 "small_bufsize": 8192, 00:13:39.910 "large_bufsize": 135168, 00:13:39.910 "enable_numa": false 00:13:39.910 } 00:13:39.910 } 00:13:39.910 ] 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "subsystem": "sock", 00:13:39.910 "config": [ 00:13:39.910 { 00:13:39.910 "method": "sock_set_default_impl", 00:13:39.910 "params": { 00:13:39.910 "impl_name": "uring" 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "sock_impl_set_options", 00:13:39.910 "params": { 00:13:39.910 "impl_name": "ssl", 00:13:39.910 "recv_buf_size": 4096, 00:13:39.910 "send_buf_size": 4096, 00:13:39.910 "enable_recv_pipe": true, 00:13:39.910 "enable_quickack": false, 00:13:39.910 "enable_placement_id": 0, 00:13:39.910 "enable_zerocopy_send_server": true, 00:13:39.910 "enable_zerocopy_send_client": false, 00:13:39.910 "zerocopy_threshold": 0, 00:13:39.910 "tls_version": 0, 00:13:39.910 "enable_ktls": false 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "sock_impl_set_options", 00:13:39.910 "params": { 00:13:39.910 "impl_name": "posix", 00:13:39.910 "recv_buf_size": 2097152, 00:13:39.910 "send_buf_size": 2097152, 00:13:39.910 "enable_recv_pipe": true, 00:13:39.910 "enable_quickack": false, 00:13:39.910 "enable_placement_id": 0, 00:13:39.910 "enable_zerocopy_send_server": true, 00:13:39.910 "enable_zerocopy_send_client": false, 00:13:39.910 "zerocopy_threshold": 0, 00:13:39.910 "tls_version": 0, 00:13:39.910 "enable_ktls": false 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "sock_impl_set_options", 00:13:39.910 "params": { 00:13:39.910 "impl_name": "uring", 00:13:39.910 "recv_buf_size": 2097152, 00:13:39.910 "send_buf_size": 2097152, 00:13:39.910 "enable_recv_pipe": true, 00:13:39.910 "enable_quickack": false, 00:13:39.910 "enable_placement_id": 0, 00:13:39.910 "enable_zerocopy_send_server": false, 00:13:39.910 "enable_zerocopy_send_client": false, 00:13:39.910 "zerocopy_threshold": 0, 00:13:39.910 "tls_version": 0, 00:13:39.910 "enable_ktls": false 00:13:39.910 } 00:13:39.910 } 00:13:39.910 ] 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "subsystem": "vmd", 00:13:39.910 "config": [] 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "subsystem": "accel", 00:13:39.910 "config": [ 00:13:39.910 { 00:13:39.910 "method": "accel_set_options", 00:13:39.910 "params": { 00:13:39.910 "small_cache_size": 128, 00:13:39.910 "large_cache_size": 16, 00:13:39.910 "task_count": 2048, 00:13:39.910 "sequence_count": 
2048, 00:13:39.910 "buf_count": 2048 00:13:39.910 } 00:13:39.910 } 00:13:39.910 ] 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "subsystem": "bdev", 00:13:39.910 "config": [ 00:13:39.910 { 00:13:39.910 "method": "bdev_set_options", 00:13:39.910 "params": { 00:13:39.910 "bdev_io_pool_size": 65535, 00:13:39.910 "bdev_io_cache_size": 256, 00:13:39.910 "bdev_auto_examine": true, 00:13:39.910 "iobuf_small_cache_size": 128, 00:13:39.910 "iobuf_large_cache_size": 16 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "bdev_raid_set_options", 00:13:39.910 "params": { 00:13:39.910 "process_window_size_kb": 1024, 00:13:39.910 "process_max_bandwidth_mb_sec": 0 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "bdev_iscsi_set_options", 00:13:39.910 "params": { 00:13:39.910 "timeout_sec": 30 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "bdev_nvme_set_options", 00:13:39.910 "params": { 00:13:39.910 "action_on_timeout": "none", 00:13:39.910 "timeout_us": 0, 00:13:39.910 "timeout_admin_us": 0, 00:13:39.910 "keep_alive_timeout_ms": 10000, 00:13:39.910 "arbitration_burst": 0, 00:13:39.910 "low_priority_weight": 0, 00:13:39.910 "medium_priority_weight": 0, 00:13:39.910 "high_priority_weight": 0, 00:13:39.910 "nvme_adminq_poll_period_us": 10000, 00:13:39.910 "nvme_ioq_poll_period_us": 0, 00:13:39.910 "io_queue_requests": 512, 00:13:39.910 "delay_cmd_submit": true, 00:13:39.910 "transport_retry_count": 4, 00:13:39.910 "bdev_retry_count": 3, 00:13:39.910 "transport_ack_timeout": 0, 00:13:39.910 "ctrlr_loss_timeout_sec": 0, 00:13:39.910 "reconnect_delay_sec": 0, 00:13:39.910 "fast_io_fail_timeout_sec": 0, 00:13:39.910 "disable_auto_failback": false, 00:13:39.910 "generate_uuids": false, 00:13:39.910 "transport_tos": 0, 00:13:39.910 "nvme_error_stat": false, 00:13:39.910 "rdma_srq_size": 0, 00:13:39.910 "io_path_stat": false, 00:13:39.910 "allow_accel_sequence": false, 00:13:39.910 "rdma_max_cq_size": 0, 00:13:39.910 "rdma_cm_event_timeout_ms": 0, 00:13:39.910 "dhchap_digests": [ 00:13:39.910 "sha256", 00:13:39.910 "sha384", 00:13:39.910 "sha512" 00:13:39.910 ], 00:13:39.910 "dhchap_dhgroups": [ 00:13:39.910 "null", 00:13:39.910 "ffdhe2048", 00:13:39.910 "ffdhe3072", 00:13:39.910 "ffdhe4096", 00:13:39.910 "ffdhe6144", 00:13:39.910 "ffdhe8192" 00:13:39.910 ] 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "bdev_nvme_attach_controller", 00:13:39.910 "params": { 00:13:39.910 "name": "TLSTEST", 00:13:39.910 "trtype": "TCP", 00:13:39.910 "adrfam": "IPv4", 00:13:39.910 "traddr": "10.0.0.3", 00:13:39.910 "trsvcid": "4420", 00:13:39.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:39.910 "prchk_reftag": false, 00:13:39.910 "prchk_guard": false, 00:13:39.910 "ctrlr_loss_timeout_sec": 0, 00:13:39.910 "reconnect_delay_sec": 0, 00:13:39.910 "fast_io_fail_timeout_sec": 0, 00:13:39.910 "psk": "key0", 00:13:39.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:39.910 "hdgst": false, 00:13:39.910 "ddgst": false, 00:13:39.910 "multipath": "multipath" 00:13:39.910 } 00:13:39.910 }, 00:13:39.910 { 00:13:39.910 "method": "bdev_nvme_set_hotplug", 00:13:39.910 "params": { 00:13:39.910 "period_us": 100000, 00:13:39.910 "enable": false 00:13:39.911 } 00:13:39.911 }, 00:13:39.911 { 00:13:39.911 "method": "bdev_wait_for_examine" 00:13:39.911 } 00:13:39.911 ] 00:13:39.911 }, 00:13:39.911 { 00:13:39.911 "subsystem": "nbd", 00:13:39.911 "config": [] 00:13:39.911 } 00:13:39.911 ] 00:13:39.911 }' 00:13:39.911 [2024-11-05 09:36:25.613150] Starting SPDK v25.01-pre git 
sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:39.911 [2024-11-05 09:36:25.613276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71896 ] 00:13:39.911 [2024-11-05 09:36:25.767684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:39.911 [2024-11-05 09:36:25.807620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.169 [2024-11-05 09:36:25.923762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:40.169 [2024-11-05 09:36:25.958924] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:40.736 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:40.736 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:40.736 09:36:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:13:40.994 Running I/O for 10 seconds... 00:13:42.871 4209.00 IOPS, 16.44 MiB/s [2024-11-05T09:36:29.764Z] 4249.50 IOPS, 16.60 MiB/s [2024-11-05T09:36:31.136Z] 4224.00 IOPS, 16.50 MiB/s [2024-11-05T09:36:32.071Z] 4202.25 IOPS, 16.42 MiB/s [2024-11-05T09:36:33.006Z] 4198.60 IOPS, 16.40 MiB/s [2024-11-05T09:36:33.940Z] 4203.17 IOPS, 16.42 MiB/s [2024-11-05T09:36:34.992Z] 4207.71 IOPS, 16.44 MiB/s [2024-11-05T09:36:35.927Z] 4202.62 IOPS, 16.42 MiB/s [2024-11-05T09:36:36.863Z] 4205.89 IOPS, 16.43 MiB/s [2024-11-05T09:36:36.863Z] 4211.50 IOPS, 16.45 MiB/s 00:13:50.905 Latency(us) 00:13:50.905 [2024-11-05T09:36:36.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.905 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:13:50.905 Verification LBA range: start 0x0 length 0x2000 00:13:50.905 TLSTESTn1 : 10.02 4214.89 16.46 0.00 0.00 30302.63 6285.50 27167.65 00:13:50.905 [2024-11-05T09:36:36.863Z] =================================================================================================================== 00:13:50.905 [2024-11-05T09:36:36.863Z] Total : 4214.89 16.46 0.00 0.00 30302.63 6285.50 27167.65 00:13:50.905 { 00:13:50.905 "results": [ 00:13:50.905 { 00:13:50.905 "job": "TLSTESTn1", 00:13:50.905 "core_mask": "0x4", 00:13:50.905 "workload": "verify", 00:13:50.905 "status": "finished", 00:13:50.905 "verify_range": { 00:13:50.905 "start": 0, 00:13:50.905 "length": 8192 00:13:50.905 }, 00:13:50.905 "queue_depth": 128, 00:13:50.905 "io_size": 4096, 00:13:50.905 "runtime": 10.022315, 00:13:50.905 "iops": 4214.894463005802, 00:13:50.905 "mibps": 16.464431496116415, 00:13:50.905 "io_failed": 0, 00:13:50.905 "io_timeout": 0, 00:13:50.905 "avg_latency_us": 30302.628166301893, 00:13:50.905 "min_latency_us": 6285.498181818181, 00:13:50.905 "max_latency_us": 27167.65090909091 00:13:50.905 } 00:13:50.905 ], 00:13:50.905 "core_count": 1 00:13:50.905 } 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 71896 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71896 ']' 00:13:50.905 
09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71896 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71896 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:13:50.905 killing process with pid 71896 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71896' 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71896 00:13:50.905 Received shutdown signal, test time was about 10.000000 seconds 00:13:50.905 00:13:50.905 Latency(us) 00:13:50.905 [2024-11-05T09:36:36.863Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:50.905 [2024-11-05T09:36:36.863Z] =================================================================================================================== 00:13:50.905 [2024-11-05T09:36:36.863Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:50.905 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71896 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 71864 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 71864 ']' 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 71864 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71864 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:51.164 killing process with pid 71864 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71864' 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 71864 00:13:51.164 09:36:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 71864 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72029 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
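Every killprocess call traced in this log follows the same shape: check the PID is non-empty, probe it with kill -0, use ps to confirm the command name is an SPDK reactor rather than sudo, then kill and reap. A reconstruction of the helper from the xtrace lines, an approximate sketch rather than the verbatim source of autotest_common.sh (the ps probe is the Linux branch selected by the uname test above):

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                      # still alive?
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1          # never signal a stray sudo
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                     # reap and surface the exit code
}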
00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72029 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72029 ']' 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:51.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:51.423 09:36:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:51.423 [2024-11-05 09:36:37.197924] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:51.423 [2024-11-05 09:36:37.198047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.423 [2024-11-05 09:36:37.349189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:51.423 [2024-11-05 09:36:37.382296] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.423 [2024-11-05 09:36:37.382379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.423 [2024-11-05 09:36:37.382406] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:51.423 [2024-11-05 09:36:37.382413] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:51.423 [2024-11-05 09:36:37.382420] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
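waitforlisten, run each time a target or bdevperf instance comes up, simply polls: while the PID stays alive it retries an RPC against the given UNIX domain socket until the server answers. A minimal sketch of the idea (max_retries=100 matches the variable in the trace; probing with rpc_get_methods is an assumption about the exact check):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2> /dev/null || return 1     # app died before listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0                                # RPC server is answering
        fi
        sleep 0.1
    done
    return 1
}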
00:13:51.423 [2024-11-05 09:36:37.382732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.682 [2024-11-05 09:36:37.413206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:52.248 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:52.248 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:52.248 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.248 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:52.248 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:52.506 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.506 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Tb7u5KQtIo 00:13:52.506 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Tb7u5KQtIo 00:13:52.506 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:13:52.764 [2024-11-05 09:36:38.485588] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.764 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:13:53.022 09:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:13:53.281 [2024-11-05 09:36:39.061773] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:53.281 [2024-11-05 09:36:39.062074] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:53.281 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:13:53.539 malloc0 00:13:53.539 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:53.810 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:54.076 09:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=72085 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 72085 /var/tmp/bdevperf.sock 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72085 ']' 00:13:54.334 
09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:54.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:54.334 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:54.334 [2024-11-05 09:36:40.152336] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:54.334 [2024-11-05 09:36:40.152446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72085 ] 00:13:54.591 [2024-11-05 09:36:40.305789] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.591 [2024-11-05 09:36:40.346110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:54.591 [2024-11-05 09:36:40.380129] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:54.591 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:54.591 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:54.591 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:54.850 09:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:55.108 [2024-11-05 09:36:41.046155] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:55.366 nvme0n1 00:13:55.366 09:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:55.366 Running I/O for 1 seconds... 
00:13:56.741 3840.00 IOPS, 15.00 MiB/s 00:13:56.741 Latency(us) 00:13:56.741 [2024-11-05T09:36:42.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.741 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.741 Verification LBA range: start 0x0 length 0x2000 00:13:56.741 nvme0n1 : 1.03 3840.18 15.00 0.00 0.00 32954.93 6642.97 23116.33 00:13:56.741 [2024-11-05T09:36:42.699Z] =================================================================================================================== 00:13:56.741 [2024-11-05T09:36:42.699Z] Total : 3840.18 15.00 0.00 0.00 32954.93 6642.97 23116.33 00:13:56.741 { 00:13:56.741 "results": [ 00:13:56.741 { 00:13:56.741 "job": "nvme0n1", 00:13:56.741 "core_mask": "0x2", 00:13:56.741 "workload": "verify", 00:13:56.741 "status": "finished", 00:13:56.741 "verify_range": { 00:13:56.741 "start": 0, 00:13:56.741 "length": 8192 00:13:56.741 }, 00:13:56.741 "queue_depth": 128, 00:13:56.741 "io_size": 4096, 00:13:56.741 "runtime": 1.033284, 00:13:56.741 "iops": 3840.183337785159, 00:13:56.741 "mibps": 15.000716163223277, 00:13:56.741 "io_failed": 0, 00:13:56.741 "io_timeout": 0, 00:13:56.741 "avg_latency_us": 32954.92504398827, 00:13:56.741 "min_latency_us": 6642.967272727273, 00:13:56.741 "max_latency_us": 23116.334545454545 00:13:56.741 } 00:13:56.741 ], 00:13:56.741 "core_count": 1 00:13:56.741 } 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 72085 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72085 ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72085 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72085 00:13:56.741 killing process with pid 72085 00:13:56.741 Received shutdown signal, test time was about 1.000000 seconds 00:13:56.741 00:13:56.741 Latency(us) 00:13:56.741 [2024-11-05T09:36:42.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.741 [2024-11-05T09:36:42.699Z] =================================================================================================================== 00:13:56.741 [2024-11-05T09:36:42.699Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72085' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72085 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72085 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 72029 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72029 ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72029 00:13:56.741 09:36:42 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72029 00:13:56.741 killing process with pid 72029 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72029' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72029 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72029 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72134 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72134 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72134 ']' 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:56.741 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.000 [2024-11-05 09:36:42.728668] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:57.000 [2024-11-05 09:36:42.728996] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.000 [2024-11-05 09:36:42.872550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.000 [2024-11-05 09:36:42.905317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.000 [2024-11-05 09:36:42.905583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
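The bdevperf result blocks in this run are internally consistent: the mibps field is just iops times the 4096-byte I/O size divided by 2^20. A quick check against the 1-second nvme0n1 run above, with the numbers copied from its results JSON:

awk 'BEGIN {
    iops = 3840.183337785159; io_size = 4096       # from the results JSON above
    printf "%.6f MiB/s\n", iops * io_size / 2^20   # prints 15.000716, matching mibps
}'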
00:13:57.000 [2024-11-05 09:36:42.905606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.000 [2024-11-05 09:36:42.905617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.000 [2024-11-05 09:36:42.905625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.000 [2024-11-05 09:36:42.905937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.000 [2024-11-05 09:36:42.937495] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.259 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.259 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:57.259 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:57.259 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.259 09:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 [2024-11-05 09:36:43.029128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.259 malloc0 00:13:57.259 [2024-11-05 09:36:43.055809] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:13:57.259 [2024-11-05 09:36:43.056049] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:57.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=72153 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 72153 /var/tmp/bdevperf.sock 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72153 ']' 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
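The target-side TLS wiring is identical in every pass of this test: create the TCP transport, create the subsystem, open a listener with -k (TLS), back the subsystem with a malloc bdev, load the PSK file into the keyring, and bind the host NQN to that key. Collected in order from the rpc.py commands traced above (rpc.py here stands for the scripts/rpc.py invocations shown with their full paths):

rpc.py nvmf_create_transport -t tcp -o
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
rpc.py bdev_malloc_create 32 4096 -b malloc0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
rpc.py keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0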
00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:57.259 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:57.259 [2024-11-05 09:36:43.144123] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:13:57.259 [2024-11-05 09:36:43.144493] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72153 ] 00:13:57.517 [2024-11-05 09:36:43.293578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.517 [2024-11-05 09:36:43.327603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.517 [2024-11-05 09:36:43.358233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:57.517 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:57.517 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:13:57.517 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo 00:13:57.775 09:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:13:58.292 [2024-11-05 09:36:43.993892] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:13:58.292 nvme0n1 00:13:58.292 09:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:58.292 Running I/O for 1 seconds... 
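The host side of the same handshake, driven over bdevperf's RPC socket just above, is only two calls: load the identical PSK file under the name key0, then attach the controller with --psk key0 so the NVMe/TCP connection is negotiated over TLS:

rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Tb7u5KQtIo
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1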
00:13:59.666 3968.00 IOPS, 15.50 MiB/s 00:13:59.666 Latency(us) 00:13:59.666 [2024-11-05T09:36:45.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.666 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:59.666 Verification LBA range: start 0x0 length 0x2000 00:13:59.666 nvme0n1 : 1.03 3989.15 15.58 0.00 0.00 31753.80 7119.59 19899.11 00:13:59.666 [2024-11-05T09:36:45.624Z] =================================================================================================================== 00:13:59.666 [2024-11-05T09:36:45.624Z] Total : 3989.15 15.58 0.00 0.00 31753.80 7119.59 19899.11 00:13:59.666 { 00:13:59.666 "results": [ 00:13:59.666 { 00:13:59.666 "job": "nvme0n1", 00:13:59.666 "core_mask": "0x2", 00:13:59.666 "workload": "verify", 00:13:59.666 "status": "finished", 00:13:59.666 "verify_range": { 00:13:59.666 "start": 0, 00:13:59.666 "length": 8192 00:13:59.666 }, 00:13:59.666 "queue_depth": 128, 00:13:59.666 "io_size": 4096, 00:13:59.666 "runtime": 1.026784, 00:13:59.666 "iops": 3989.1544862405335, 00:13:59.666 "mibps": 15.582634711877084, 00:13:59.666 "io_failed": 0, 00:13:59.666 "io_timeout": 0, 00:13:59.666 "avg_latency_us": 31753.8, 00:13:59.666 "min_latency_us": 7119.592727272728, 00:13:59.666 "max_latency_us": 19899.112727272728 00:13:59.666 } 00:13:59.666 ], 00:13:59.666 "core_count": 1 00:13:59.666 } 00:13:59.666 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:13:59.666 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.666 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:13:59.666 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.666 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:13:59.666 "subsystems": [ 00:13:59.666 { 00:13:59.666 "subsystem": "keyring", 00:13:59.666 "config": [ 00:13:59.666 { 00:13:59.666 "method": "keyring_file_add_key", 00:13:59.666 "params": { 00:13:59.666 "name": "key0", 00:13:59.666 "path": "/tmp/tmp.Tb7u5KQtIo" 00:13:59.666 } 00:13:59.666 } 00:13:59.666 ] 00:13:59.666 }, 00:13:59.666 { 00:13:59.666 "subsystem": "iobuf", 00:13:59.666 "config": [ 00:13:59.666 { 00:13:59.666 "method": "iobuf_set_options", 00:13:59.666 "params": { 00:13:59.666 "small_pool_count": 8192, 00:13:59.666 "large_pool_count": 1024, 00:13:59.666 "small_bufsize": 8192, 00:13:59.666 "large_bufsize": 135168, 00:13:59.666 "enable_numa": false 00:13:59.666 } 00:13:59.666 } 00:13:59.666 ] 00:13:59.666 }, 00:13:59.666 { 00:13:59.666 "subsystem": "sock", 00:13:59.666 "config": [ 00:13:59.666 { 00:13:59.666 "method": "sock_set_default_impl", 00:13:59.666 "params": { 00:13:59.666 "impl_name": "uring" 00:13:59.666 } 00:13:59.666 }, 00:13:59.666 { 00:13:59.666 "method": "sock_impl_set_options", 00:13:59.666 "params": { 00:13:59.666 "impl_name": "ssl", 00:13:59.667 "recv_buf_size": 4096, 00:13:59.667 "send_buf_size": 4096, 00:13:59.667 "enable_recv_pipe": true, 00:13:59.667 "enable_quickack": false, 00:13:59.667 "enable_placement_id": 0, 00:13:59.667 "enable_zerocopy_send_server": true, 00:13:59.667 "enable_zerocopy_send_client": false, 00:13:59.667 "zerocopy_threshold": 0, 00:13:59.667 "tls_version": 0, 00:13:59.667 "enable_ktls": false 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "sock_impl_set_options", 00:13:59.667 "params": { 00:13:59.667 "impl_name": "posix", 
00:13:59.667 "recv_buf_size": 2097152, 00:13:59.667 "send_buf_size": 2097152, 00:13:59.667 "enable_recv_pipe": true, 00:13:59.667 "enable_quickack": false, 00:13:59.667 "enable_placement_id": 0, 00:13:59.667 "enable_zerocopy_send_server": true, 00:13:59.667 "enable_zerocopy_send_client": false, 00:13:59.667 "zerocopy_threshold": 0, 00:13:59.667 "tls_version": 0, 00:13:59.667 "enable_ktls": false 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "sock_impl_set_options", 00:13:59.667 "params": { 00:13:59.667 "impl_name": "uring", 00:13:59.667 "recv_buf_size": 2097152, 00:13:59.667 "send_buf_size": 2097152, 00:13:59.667 "enable_recv_pipe": true, 00:13:59.667 "enable_quickack": false, 00:13:59.667 "enable_placement_id": 0, 00:13:59.667 "enable_zerocopy_send_server": false, 00:13:59.667 "enable_zerocopy_send_client": false, 00:13:59.667 "zerocopy_threshold": 0, 00:13:59.667 "tls_version": 0, 00:13:59.667 "enable_ktls": false 00:13:59.667 } 00:13:59.667 } 00:13:59.667 ] 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "subsystem": "vmd", 00:13:59.667 "config": [] 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "subsystem": "accel", 00:13:59.667 "config": [ 00:13:59.667 { 00:13:59.667 "method": "accel_set_options", 00:13:59.667 "params": { 00:13:59.667 "small_cache_size": 128, 00:13:59.667 "large_cache_size": 16, 00:13:59.667 "task_count": 2048, 00:13:59.667 "sequence_count": 2048, 00:13:59.667 "buf_count": 2048 00:13:59.667 } 00:13:59.667 } 00:13:59.667 ] 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "subsystem": "bdev", 00:13:59.667 "config": [ 00:13:59.667 { 00:13:59.667 "method": "bdev_set_options", 00:13:59.667 "params": { 00:13:59.667 "bdev_io_pool_size": 65535, 00:13:59.667 "bdev_io_cache_size": 256, 00:13:59.667 "bdev_auto_examine": true, 00:13:59.667 "iobuf_small_cache_size": 128, 00:13:59.667 "iobuf_large_cache_size": 16 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "bdev_raid_set_options", 00:13:59.667 "params": { 00:13:59.667 "process_window_size_kb": 1024, 00:13:59.667 "process_max_bandwidth_mb_sec": 0 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "bdev_iscsi_set_options", 00:13:59.667 "params": { 00:13:59.667 "timeout_sec": 30 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "bdev_nvme_set_options", 00:13:59.667 "params": { 00:13:59.667 "action_on_timeout": "none", 00:13:59.667 "timeout_us": 0, 00:13:59.667 "timeout_admin_us": 0, 00:13:59.667 "keep_alive_timeout_ms": 10000, 00:13:59.667 "arbitration_burst": 0, 00:13:59.667 "low_priority_weight": 0, 00:13:59.667 "medium_priority_weight": 0, 00:13:59.667 "high_priority_weight": 0, 00:13:59.667 "nvme_adminq_poll_period_us": 10000, 00:13:59.667 "nvme_ioq_poll_period_us": 0, 00:13:59.667 "io_queue_requests": 0, 00:13:59.667 "delay_cmd_submit": true, 00:13:59.667 "transport_retry_count": 4, 00:13:59.667 "bdev_retry_count": 3, 00:13:59.667 "transport_ack_timeout": 0, 00:13:59.667 "ctrlr_loss_timeout_sec": 0, 00:13:59.667 "reconnect_delay_sec": 0, 00:13:59.667 "fast_io_fail_timeout_sec": 0, 00:13:59.667 "disable_auto_failback": false, 00:13:59.667 "generate_uuids": false, 00:13:59.667 "transport_tos": 0, 00:13:59.667 "nvme_error_stat": false, 00:13:59.667 "rdma_srq_size": 0, 00:13:59.667 "io_path_stat": false, 00:13:59.667 "allow_accel_sequence": false, 00:13:59.667 "rdma_max_cq_size": 0, 00:13:59.667 "rdma_cm_event_timeout_ms": 0, 00:13:59.667 "dhchap_digests": [ 00:13:59.667 "sha256", 00:13:59.667 "sha384", 00:13:59.667 "sha512" 00:13:59.667 ], 00:13:59.667 
"dhchap_dhgroups": [ 00:13:59.667 "null", 00:13:59.667 "ffdhe2048", 00:13:59.667 "ffdhe3072", 00:13:59.667 "ffdhe4096", 00:13:59.667 "ffdhe6144", 00:13:59.667 "ffdhe8192" 00:13:59.667 ] 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "bdev_nvme_set_hotplug", 00:13:59.667 "params": { 00:13:59.667 "period_us": 100000, 00:13:59.667 "enable": false 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "bdev_malloc_create", 00:13:59.667 "params": { 00:13:59.667 "name": "malloc0", 00:13:59.667 "num_blocks": 8192, 00:13:59.667 "block_size": 4096, 00:13:59.667 "physical_block_size": 4096, 00:13:59.667 "uuid": "81a223ca-b0e4-46ea-8ebd-0669880aa806", 00:13:59.667 "optimal_io_boundary": 0, 00:13:59.667 "md_size": 0, 00:13:59.667 "dif_type": 0, 00:13:59.667 "dif_is_head_of_md": false, 00:13:59.667 "dif_pi_format": 0 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "bdev_wait_for_examine" 00:13:59.667 } 00:13:59.667 ] 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "subsystem": "nbd", 00:13:59.667 "config": [] 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "subsystem": "scheduler", 00:13:59.667 "config": [ 00:13:59.667 { 00:13:59.667 "method": "framework_set_scheduler", 00:13:59.667 "params": { 00:13:59.667 "name": "static" 00:13:59.667 } 00:13:59.667 } 00:13:59.667 ] 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "subsystem": "nvmf", 00:13:59.667 "config": [ 00:13:59.667 { 00:13:59.667 "method": "nvmf_set_config", 00:13:59.667 "params": { 00:13:59.667 "discovery_filter": "match_any", 00:13:59.667 "admin_cmd_passthru": { 00:13:59.667 "identify_ctrlr": false 00:13:59.667 }, 00:13:59.667 "dhchap_digests": [ 00:13:59.667 "sha256", 00:13:59.667 "sha384", 00:13:59.667 "sha512" 00:13:59.667 ], 00:13:59.667 "dhchap_dhgroups": [ 00:13:59.667 "null", 00:13:59.667 "ffdhe2048", 00:13:59.667 "ffdhe3072", 00:13:59.667 "ffdhe4096", 00:13:59.667 "ffdhe6144", 00:13:59.667 "ffdhe8192" 00:13:59.667 ] 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_set_max_subsystems", 00:13:59.667 "params": { 00:13:59.667 "max_subsystems": 1024 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_set_crdt", 00:13:59.667 "params": { 00:13:59.667 "crdt1": 0, 00:13:59.667 "crdt2": 0, 00:13:59.667 "crdt3": 0 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_create_transport", 00:13:59.667 "params": { 00:13:59.667 "trtype": "TCP", 00:13:59.667 "max_queue_depth": 128, 00:13:59.667 "max_io_qpairs_per_ctrlr": 127, 00:13:59.667 "in_capsule_data_size": 4096, 00:13:59.667 "max_io_size": 131072, 00:13:59.667 "io_unit_size": 131072, 00:13:59.667 "max_aq_depth": 128, 00:13:59.667 "num_shared_buffers": 511, 00:13:59.667 "buf_cache_size": 4294967295, 00:13:59.667 "dif_insert_or_strip": false, 00:13:59.667 "zcopy": false, 00:13:59.667 "c2h_success": false, 00:13:59.667 "sock_priority": 0, 00:13:59.667 "abort_timeout_sec": 1, 00:13:59.667 "ack_timeout": 0, 00:13:59.667 "data_wr_pool_size": 0 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_create_subsystem", 00:13:59.667 "params": { 00:13:59.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.667 "allow_any_host": false, 00:13:59.667 "serial_number": "00000000000000000000", 00:13:59.667 "model_number": "SPDK bdev Controller", 00:13:59.667 "max_namespaces": 32, 00:13:59.667 "min_cntlid": 1, 00:13:59.667 "max_cntlid": 65519, 00:13:59.667 "ana_reporting": false 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_subsystem_add_host", 
00:13:59.667 "params": { 00:13:59.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.667 "host": "nqn.2016-06.io.spdk:host1", 00:13:59.667 "psk": "key0" 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_subsystem_add_ns", 00:13:59.667 "params": { 00:13:59.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.667 "namespace": { 00:13:59.667 "nsid": 1, 00:13:59.667 "bdev_name": "malloc0", 00:13:59.667 "nguid": "81A223CAB0E446EA8EBD0669880AA806", 00:13:59.667 "uuid": "81a223ca-b0e4-46ea-8ebd-0669880aa806", 00:13:59.667 "no_auto_visible": false 00:13:59.667 } 00:13:59.667 } 00:13:59.667 }, 00:13:59.667 { 00:13:59.667 "method": "nvmf_subsystem_add_listener", 00:13:59.667 "params": { 00:13:59.667 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.667 "listen_address": { 00:13:59.667 "trtype": "TCP", 00:13:59.667 "adrfam": "IPv4", 00:13:59.667 "traddr": "10.0.0.3", 00:13:59.667 "trsvcid": "4420" 00:13:59.667 }, 00:13:59.667 "secure_channel": false, 00:13:59.667 "sock_impl": "ssl" 00:13:59.667 } 00:13:59.667 } 00:13:59.667 ] 00:13:59.667 } 00:13:59.667 ] 00:13:59.667 }' 00:13:59.668 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:13:59.926 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:13:59.926 "subsystems": [ 00:13:59.926 { 00:13:59.926 "subsystem": "keyring", 00:13:59.926 "config": [ 00:13:59.926 { 00:13:59.926 "method": "keyring_file_add_key", 00:13:59.926 "params": { 00:13:59.927 "name": "key0", 00:13:59.927 "path": "/tmp/tmp.Tb7u5KQtIo" 00:13:59.927 } 00:13:59.927 } 00:13:59.927 ] 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "subsystem": "iobuf", 00:13:59.927 "config": [ 00:13:59.927 { 00:13:59.927 "method": "iobuf_set_options", 00:13:59.927 "params": { 00:13:59.927 "small_pool_count": 8192, 00:13:59.927 "large_pool_count": 1024, 00:13:59.927 "small_bufsize": 8192, 00:13:59.927 "large_bufsize": 135168, 00:13:59.927 "enable_numa": false 00:13:59.927 } 00:13:59.927 } 00:13:59.927 ] 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "subsystem": "sock", 00:13:59.927 "config": [ 00:13:59.927 { 00:13:59.927 "method": "sock_set_default_impl", 00:13:59.927 "params": { 00:13:59.927 "impl_name": "uring" 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "sock_impl_set_options", 00:13:59.927 "params": { 00:13:59.927 "impl_name": "ssl", 00:13:59.927 "recv_buf_size": 4096, 00:13:59.927 "send_buf_size": 4096, 00:13:59.927 "enable_recv_pipe": true, 00:13:59.927 "enable_quickack": false, 00:13:59.927 "enable_placement_id": 0, 00:13:59.927 "enable_zerocopy_send_server": true, 00:13:59.927 "enable_zerocopy_send_client": false, 00:13:59.927 "zerocopy_threshold": 0, 00:13:59.927 "tls_version": 0, 00:13:59.927 "enable_ktls": false 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "sock_impl_set_options", 00:13:59.927 "params": { 00:13:59.927 "impl_name": "posix", 00:13:59.927 "recv_buf_size": 2097152, 00:13:59.927 "send_buf_size": 2097152, 00:13:59.927 "enable_recv_pipe": true, 00:13:59.927 "enable_quickack": false, 00:13:59.927 "enable_placement_id": 0, 00:13:59.927 "enable_zerocopy_send_server": true, 00:13:59.927 "enable_zerocopy_send_client": false, 00:13:59.927 "zerocopy_threshold": 0, 00:13:59.927 "tls_version": 0, 00:13:59.927 "enable_ktls": false 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "sock_impl_set_options", 00:13:59.927 "params": { 00:13:59.927 "impl_name": "uring", 00:13:59.927 
"recv_buf_size": 2097152, 00:13:59.927 "send_buf_size": 2097152, 00:13:59.927 "enable_recv_pipe": true, 00:13:59.927 "enable_quickack": false, 00:13:59.927 "enable_placement_id": 0, 00:13:59.927 "enable_zerocopy_send_server": false, 00:13:59.927 "enable_zerocopy_send_client": false, 00:13:59.927 "zerocopy_threshold": 0, 00:13:59.927 "tls_version": 0, 00:13:59.927 "enable_ktls": false 00:13:59.927 } 00:13:59.927 } 00:13:59.927 ] 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "subsystem": "vmd", 00:13:59.927 "config": [] 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "subsystem": "accel", 00:13:59.927 "config": [ 00:13:59.927 { 00:13:59.927 "method": "accel_set_options", 00:13:59.927 "params": { 00:13:59.927 "small_cache_size": 128, 00:13:59.927 "large_cache_size": 16, 00:13:59.927 "task_count": 2048, 00:13:59.927 "sequence_count": 2048, 00:13:59.927 "buf_count": 2048 00:13:59.927 } 00:13:59.927 } 00:13:59.927 ] 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "subsystem": "bdev", 00:13:59.927 "config": [ 00:13:59.927 { 00:13:59.927 "method": "bdev_set_options", 00:13:59.927 "params": { 00:13:59.927 "bdev_io_pool_size": 65535, 00:13:59.927 "bdev_io_cache_size": 256, 00:13:59.927 "bdev_auto_examine": true, 00:13:59.927 "iobuf_small_cache_size": 128, 00:13:59.927 "iobuf_large_cache_size": 16 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_raid_set_options", 00:13:59.927 "params": { 00:13:59.927 "process_window_size_kb": 1024, 00:13:59.927 "process_max_bandwidth_mb_sec": 0 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_iscsi_set_options", 00:13:59.927 "params": { 00:13:59.927 "timeout_sec": 30 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_nvme_set_options", 00:13:59.927 "params": { 00:13:59.927 "action_on_timeout": "none", 00:13:59.927 "timeout_us": 0, 00:13:59.927 "timeout_admin_us": 0, 00:13:59.927 "keep_alive_timeout_ms": 10000, 00:13:59.927 "arbitration_burst": 0, 00:13:59.927 "low_priority_weight": 0, 00:13:59.927 "medium_priority_weight": 0, 00:13:59.927 "high_priority_weight": 0, 00:13:59.927 "nvme_adminq_poll_period_us": 10000, 00:13:59.927 "nvme_ioq_poll_period_us": 0, 00:13:59.927 "io_queue_requests": 512, 00:13:59.927 "delay_cmd_submit": true, 00:13:59.927 "transport_retry_count": 4, 00:13:59.927 "bdev_retry_count": 3, 00:13:59.927 "transport_ack_timeout": 0, 00:13:59.927 "ctrlr_loss_timeout_sec": 0, 00:13:59.927 "reconnect_delay_sec": 0, 00:13:59.927 "fast_io_fail_timeout_sec": 0, 00:13:59.927 "disable_auto_failback": false, 00:13:59.927 "generate_uuids": false, 00:13:59.927 "transport_tos": 0, 00:13:59.927 "nvme_error_stat": false, 00:13:59.927 "rdma_srq_size": 0, 00:13:59.927 "io_path_stat": false, 00:13:59.927 "allow_accel_sequence": false, 00:13:59.927 "rdma_max_cq_size": 0, 00:13:59.927 "rdma_cm_event_timeout_ms": 0, 00:13:59.927 "dhchap_digests": [ 00:13:59.927 "sha256", 00:13:59.927 "sha384", 00:13:59.927 "sha512" 00:13:59.927 ], 00:13:59.927 "dhchap_dhgroups": [ 00:13:59.927 "null", 00:13:59.927 "ffdhe2048", 00:13:59.927 "ffdhe3072", 00:13:59.927 "ffdhe4096", 00:13:59.927 "ffdhe6144", 00:13:59.927 "ffdhe8192" 00:13:59.927 ] 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_nvme_attach_controller", 00:13:59.927 "params": { 00:13:59.927 "name": "nvme0", 00:13:59.927 "trtype": "TCP", 00:13:59.927 "adrfam": "IPv4", 00:13:59.927 "traddr": "10.0.0.3", 00:13:59.927 "trsvcid": "4420", 00:13:59.927 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:59.927 "prchk_reftag": false, 00:13:59.927 
"prchk_guard": false, 00:13:59.927 "ctrlr_loss_timeout_sec": 0, 00:13:59.927 "reconnect_delay_sec": 0, 00:13:59.927 "fast_io_fail_timeout_sec": 0, 00:13:59.927 "psk": "key0", 00:13:59.927 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:59.927 "hdgst": false, 00:13:59.927 "ddgst": false, 00:13:59.927 "multipath": "multipath" 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_nvme_set_hotplug", 00:13:59.927 "params": { 00:13:59.927 "period_us": 100000, 00:13:59.927 "enable": false 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_enable_histogram", 00:13:59.927 "params": { 00:13:59.927 "name": "nvme0n1", 00:13:59.927 "enable": true 00:13:59.927 } 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "method": "bdev_wait_for_examine" 00:13:59.927 } 00:13:59.927 ] 00:13:59.927 }, 00:13:59.927 { 00:13:59.927 "subsystem": "nbd", 00:13:59.927 "config": [] 00:13:59.927 } 00:13:59.927 ] 00:13:59.927 }' 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 72153 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72153 ']' 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72153 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72153 00:13:59.927 killing process with pid 72153 00:13:59.927 Received shutdown signal, test time was about 1.000000 seconds 00:13:59.927 00:13:59.927 Latency(us) 00:13:59.927 [2024-11-05T09:36:45.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:59.927 [2024-11-05T09:36:45.885Z] =================================================================================================================== 00:13:59.927 [2024-11-05T09:36:45.885Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:59.927 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:13:59.928 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:13:59.928 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72153' 00:13:59.928 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72153 00:13:59.928 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72153 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 72134 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72134 ']' 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72134 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72134 00:14:00.187 killing process with pid 72134 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 
00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72134' 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72134 00:14:00.187 09:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72134 00:14:00.187 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:14:00.187 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:00.187 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:00.187 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:14:00.187 "subsystems": [ 00:14:00.187 { 00:14:00.187 "subsystem": "keyring", 00:14:00.187 "config": [ 00:14:00.187 { 00:14:00.187 "method": "keyring_file_add_key", 00:14:00.187 "params": { 00:14:00.187 "name": "key0", 00:14:00.187 "path": "/tmp/tmp.Tb7u5KQtIo" 00:14:00.187 } 00:14:00.187 } 00:14:00.187 ] 00:14:00.187 }, 00:14:00.187 { 00:14:00.187 "subsystem": "iobuf", 00:14:00.187 "config": [ 00:14:00.187 { 00:14:00.187 "method": "iobuf_set_options", 00:14:00.187 "params": { 00:14:00.187 "small_pool_count": 8192, 00:14:00.187 "large_pool_count": 1024, 00:14:00.187 "small_bufsize": 8192, 00:14:00.187 "large_bufsize": 135168, 00:14:00.187 "enable_numa": false 00:14:00.188 } 00:14:00.188 } 00:14:00.188 ] 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "subsystem": "sock", 00:14:00.188 "config": [ 00:14:00.188 { 00:14:00.188 "method": "sock_set_default_impl", 00:14:00.188 "params": { 00:14:00.188 "impl_name": "uring" 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "sock_impl_set_options", 00:14:00.188 "params": { 00:14:00.188 "impl_name": "ssl", 00:14:00.188 "recv_buf_size": 4096, 00:14:00.188 "send_buf_size": 4096, 00:14:00.188 "enable_recv_pipe": true, 00:14:00.188 "enable_quickack": false, 00:14:00.188 "enable_placement_id": 0, 00:14:00.188 "enable_zerocopy_send_server": true, 00:14:00.188 "enable_zerocopy_send_client": false, 00:14:00.188 "zerocopy_threshold": 0, 00:14:00.188 "tls_version": 0, 00:14:00.188 "enable_ktls": false 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "sock_impl_set_options", 00:14:00.188 "params": { 00:14:00.188 "impl_name": "posix", 00:14:00.188 "recv_buf_size": 2097152, 00:14:00.188 "send_buf_size": 2097152, 00:14:00.188 "enable_recv_pipe": true, 00:14:00.188 "enable_quickack": false, 00:14:00.188 "enable_placement_id": 0, 00:14:00.188 "enable_zerocopy_send_server": true, 00:14:00.188 "enable_zerocopy_send_client": false, 00:14:00.188 "zerocopy_threshold": 0, 00:14:00.188 "tls_version": 0, 00:14:00.188 "enable_ktls": false 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "sock_impl_set_options", 00:14:00.188 "params": { 00:14:00.188 "impl_name": "uring", 00:14:00.188 "recv_buf_size": 2097152, 00:14:00.188 "send_buf_size": 2097152, 00:14:00.188 "enable_recv_pipe": true, 00:14:00.188 "enable_quickack": false, 00:14:00.188 "enable_placement_id": 0, 00:14:00.188 "enable_zerocopy_send_server": false, 00:14:00.188 "enable_zerocopy_send_client": false, 00:14:00.188 "zerocopy_threshold": 0, 00:14:00.188 "tls_version": 0, 00:14:00.188 "enable_ktls": false 00:14:00.188 } 00:14:00.188 } 00:14:00.188 ] 00:14:00.188 }, 00:14:00.188 { 
00:14:00.188 "subsystem": "vmd", 00:14:00.188 "config": [] 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "subsystem": "accel", 00:14:00.188 "config": [ 00:14:00.188 { 00:14:00.188 "method": "accel_set_options", 00:14:00.188 "params": { 00:14:00.188 "small_cache_size": 128, 00:14:00.188 "large_cache_size": 16, 00:14:00.188 "task_count": 2048, 00:14:00.188 "sequence_count": 2048, 00:14:00.188 "buf_count": 2048 00:14:00.188 } 00:14:00.188 } 00:14:00.188 ] 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "subsystem": "bdev", 00:14:00.188 "config": [ 00:14:00.188 { 00:14:00.188 "method": "bdev_set_options", 00:14:00.188 "params": { 00:14:00.188 "bdev_io_pool_size": 65535, 00:14:00.188 "bdev_io_cache_size": 256, 00:14:00.188 "bdev_auto_examine": true, 00:14:00.188 "iobuf_small_cache_size": 128, 00:14:00.188 "iobuf_large_cache_size": 16 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "bdev_raid_set_options", 00:14:00.188 "params": { 00:14:00.188 "process_window_size_kb": 1024, 00:14:00.188 "process_max_bandwidth_mb_sec": 0 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "bdev_iscsi_set_options", 00:14:00.188 "params": { 00:14:00.188 "timeout_sec": 30 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "bdev_nvme_set_options", 00:14:00.188 "params": { 00:14:00.188 "action_on_timeout": "none", 00:14:00.188 "timeout_us": 0, 00:14:00.188 "timeout_admin_us": 0, 00:14:00.188 "keep_alive_timeout_ms": 10000, 00:14:00.188 "arbitration_burst": 0, 00:14:00.188 "low_priority_weight": 0, 00:14:00.188 "medium_priority_weight": 0, 00:14:00.188 "high_priority_weight": 0, 00:14:00.188 "nvme_adminq_poll_period_us": 10000, 00:14:00.188 "nvme_ioq_poll_period_us": 0, 00:14:00.188 "io_queue_requests": 0, 00:14:00.188 "delay_cmd_submit": true, 00:14:00.188 "transport_retry_count": 4, 00:14:00.188 "bdev_retry_count": 3, 00:14:00.188 "transport_ack_timeout": 0, 00:14:00.188 "ctrlr_loss_timeout_sec": 0, 00:14:00.188 "reconnect_delay_sec": 0, 00:14:00.188 "fast_io_fail_timeout_sec": 0, 00:14:00.188 "disable_auto_failback": false, 00:14:00.188 "generate_uuids": false, 00:14:00.188 "transport_tos": 0, 00:14:00.188 "nvme_error_stat": false, 00:14:00.188 "rdma_srq_size": 0, 00:14:00.188 "io_path_stat": false, 00:14:00.188 "allow_accel_sequence": false, 00:14:00.188 "rdma_max_cq_size": 0, 00:14:00.188 "rdma_cm_event_timeout_ms": 0, 00:14:00.188 "dhchap_digests": [ 00:14:00.188 "sha256", 00:14:00.188 "sha384", 00:14:00.188 "sha512" 00:14:00.188 ], 00:14:00.188 "dhchap_dhgroups": [ 00:14:00.188 "null", 00:14:00.188 "ffdhe2048", 00:14:00.188 "ffdhe3072", 00:14:00.188 "ffdhe4096", 00:14:00.188 "ffdhe6144", 00:14:00.188 "ffdhe8192" 00:14:00.188 ] 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "bdev_nvme_set_hotplug", 00:14:00.188 "params": { 00:14:00.188 "period_us": 100000, 00:14:00.188 "enable": false 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "bdev_malloc_create", 00:14:00.188 "params": { 00:14:00.188 "name": "malloc0", 00:14:00.188 "num_blocks": 8192, 00:14:00.188 "block_size": 4096, 00:14:00.188 "physical_block_size": 4096, 00:14:00.188 "uuid": "81a223ca-b0e4-46ea-8ebd-0669880aa806", 00:14:00.188 "optimal_io_boundary": 0, 00:14:00.188 "md_size": 0, 00:14:00.188 "dif_type": 0, 00:14:00.188 "dif_is_head_of_md": false, 00:14:00.188 "dif_pi_format": 0 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "bdev_wait_for_examine" 00:14:00.188 } 00:14:00.188 ] 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "subsystem": 
"nbd", 00:14:00.188 "config": [] 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "subsystem": "scheduler", 00:14:00.188 "config": [ 00:14:00.188 { 00:14:00.188 "method": "framework_set_scheduler", 00:14:00.188 "params": { 00:14:00.188 "name": "static" 00:14:00.188 } 00:14:00.188 } 00:14:00.188 ] 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "subsystem": "nvmf", 00:14:00.188 "config": [ 00:14:00.188 { 00:14:00.188 "method": "nvmf_set_config", 00:14:00.188 "params": { 00:14:00.188 "discovery_filter": "match_any", 00:14:00.188 "admin_cmd_passthru": { 00:14:00.188 "identify_ctrlr": false 00:14:00.188 }, 00:14:00.188 "dhchap_digests": [ 00:14:00.188 "sha256", 00:14:00.188 "sha384", 00:14:00.188 "sha512" 00:14:00.188 ], 00:14:00.188 "dhchap_dhgroups": [ 00:14:00.188 "null", 00:14:00.188 "ffdhe2048", 00:14:00.188 "ffdhe3072", 00:14:00.188 "ffdhe4096", 00:14:00.188 "ffdhe6144", 00:14:00.188 "ffdhe8192" 00:14:00.188 ] 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_set_max_subsystems", 00:14:00.188 "params": { 00:14:00.188 "max_subsystems": 1024 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_set_crdt", 00:14:00.188 "params": { 00:14:00.188 "crdt1": 0, 00:14:00.188 "crdt2": 0, 00:14:00.188 "crdt3": 0 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_create_transport", 00:14:00.188 "params": { 00:14:00.188 "trtype": "TCP", 00:14:00.188 "max_queue_depth": 128, 00:14:00.188 "max_io_qpairs_per_ctrlr": 127, 00:14:00.188 "in_capsule_data_size": 4096, 00:14:00.188 "max_io_size": 131072, 00:14:00.188 "io_unit_size": 131072, 00:14:00.188 "max_aq_depth": 128, 00:14:00.188 "num_shared_buffers": 511, 00:14:00.188 "buf_cache_size": 4294967295, 00:14:00.188 "dif_insert_or_strip": false, 00:14:00.188 "zcopy": false, 00:14:00.188 "c2h_success": false, 00:14:00.188 "sock_priority": 0, 00:14:00.188 "abort_timeout_sec": 1, 00:14:00.188 "ack_timeout": 0, 00:14:00.188 "data_wr_pool_size": 0 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_create_subsystem", 00:14:00.188 "params": { 00:14:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.188 "allow_any_host": false, 00:14:00.188 "serial_number": "00000000000000000000", 00:14:00.188 "model_number": "SPDK bdev Controller", 00:14:00.188 "max_namespaces": 32, 00:14:00.188 "min_cntlid": 1, 00:14:00.188 "max_cntlid": 65519, 00:14:00.188 "ana_reporting": false 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_subsystem_add_host", 00:14:00.188 "params": { 00:14:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.188 "host": "nqn.2016-06.io.spdk:host1", 00:14:00.188 "psk": "key0" 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_subsystem_add_ns", 00:14:00.188 "params": { 00:14:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.188 "namespace": { 00:14:00.188 "nsid": 1, 00:14:00.188 "bdev_name": "malloc0", 00:14:00.188 "nguid": "81A223CAB0E446EA8EBD0669880AA806", 00:14:00.188 "uuid": "81a223ca-b0e4-46ea-8ebd-0669880aa806", 00:14:00.188 "no_auto_visible": false 00:14:00.188 } 00:14:00.188 } 00:14:00.188 }, 00:14:00.188 { 00:14:00.188 "method": "nvmf_subsystem_add_listener", 00:14:00.188 "params": { 00:14:00.188 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:00.188 "listen_address": { 00:14:00.189 "trtype": "TCP", 00:14:00.189 "adrfam": "IPv4", 00:14:00.189 "traddr": "10.0.0.3", 00:14:00.189 "trsvcid": "4420" 00:14:00.189 }, 00:14:00.189 "secure_channel": false, 00:14:00.189 "sock_impl": "ssl" 00:14:00.189 } 00:14:00.189 } 
00:14:00.189 ] 00:14:00.189 } 00:14:00.189 ] 00:14:00.189 }' 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=72206 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 72206 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72206 ']' 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:00.189 09:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:00.448 [2024-11-05 09:36:46.157483] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:00.448 [2024-11-05 09:36:46.157792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.448 [2024-11-05 09:36:46.299599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.448 [2024-11-05 09:36:46.328893] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.448 [2024-11-05 09:36:46.329186] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.448 [2024-11-05 09:36:46.329330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.448 [2024-11-05 09:36:46.329383] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.448 [2024-11-05 09:36:46.329489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
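Note on the startup just traced: nvmfappstart feeds the target no config file on disk; the JSON echoed above reaches nvmf_tgt through -c /dev/fd/62, a file descriptor backed by a process substitution. A minimal sketch of that replay pattern, assuming a shell at the spdk repo root and reusing the $tgtcfg variable name from the trace (with <(...) the kernel picks the /dev/fd number, 62 here):

    tgtcfg=$(scripts/rpc.py save_config)                     # capture the live configuration as JSON
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")   # replay it at startup via /dev/fd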
00:14:00.448 [2024-11-05 09:36:46.329870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.707 [2024-11-05 09:36:46.473300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:00.707 [2024-11-05 09:36:46.530154] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.707 [2024-11-05 09:36:46.562145] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:00.707 [2024-11-05 09:36:46.562400] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=72238 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 72238 /var/tmp/bdevperf.sock 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 72238 ']' 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
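waitforlisten, invoked above for /var/tmp/bdevperf.sock, simply blocks until the application's RPC socket answers. One way to approximate it by hand (a sketch under that assumption, not the harness's exact loop; rpc_get_methods is a standard SPDK RPC that any live app responds to):

    until scripts/rpc.py -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1    # socket not up yet; retry until the app starts listening
    done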
00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:14:01.333 09:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:14:01.333 "subsystems": [ 00:14:01.333 { 00:14:01.333 "subsystem": "keyring", 00:14:01.334 "config": [ 00:14:01.334 { 00:14:01.334 "method": "keyring_file_add_key", 00:14:01.334 "params": { 00:14:01.334 "name": "key0", 00:14:01.334 "path": "/tmp/tmp.Tb7u5KQtIo" 00:14:01.334 } 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "subsystem": "iobuf", 00:14:01.334 "config": [ 00:14:01.334 { 00:14:01.334 "method": "iobuf_set_options", 00:14:01.334 "params": { 00:14:01.334 "small_pool_count": 8192, 00:14:01.334 "large_pool_count": 1024, 00:14:01.334 "small_bufsize": 8192, 00:14:01.334 "large_bufsize": 135168, 00:14:01.334 "enable_numa": false 00:14:01.334 } 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "subsystem": "sock", 00:14:01.334 "config": [ 00:14:01.334 { 00:14:01.334 "method": "sock_set_default_impl", 00:14:01.334 "params": { 00:14:01.334 "impl_name": "uring" 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "sock_impl_set_options", 00:14:01.334 "params": { 00:14:01.334 "impl_name": "ssl", 00:14:01.334 "recv_buf_size": 4096, 00:14:01.334 "send_buf_size": 4096, 00:14:01.334 "enable_recv_pipe": true, 00:14:01.334 "enable_quickack": false, 00:14:01.334 "enable_placement_id": 0, 00:14:01.334 "enable_zerocopy_send_server": true, 00:14:01.334 "enable_zerocopy_send_client": false, 00:14:01.334 "zerocopy_threshold": 0, 00:14:01.334 "tls_version": 0, 00:14:01.334 "enable_ktls": false 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "sock_impl_set_options", 00:14:01.334 "params": { 00:14:01.334 "impl_name": "posix", 00:14:01.334 "recv_buf_size": 2097152, 00:14:01.334 "send_buf_size": 2097152, 00:14:01.334 "enable_recv_pipe": true, 00:14:01.334 "enable_quickack": false, 00:14:01.334 "enable_placement_id": 0, 00:14:01.334 "enable_zerocopy_send_server": true, 00:14:01.334 "enable_zerocopy_send_client": false, 00:14:01.334 "zerocopy_threshold": 0, 00:14:01.334 "tls_version": 0, 00:14:01.334 "enable_ktls": false 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "sock_impl_set_options", 00:14:01.334 "params": { 00:14:01.334 "impl_name": "uring", 00:14:01.334 "recv_buf_size": 2097152, 00:14:01.334 "send_buf_size": 2097152, 00:14:01.334 "enable_recv_pipe": true, 00:14:01.334 "enable_quickack": false, 00:14:01.334 "enable_placement_id": 0, 00:14:01.334 "enable_zerocopy_send_server": false, 00:14:01.334 "enable_zerocopy_send_client": false, 00:14:01.334 "zerocopy_threshold": 0, 00:14:01.334 "tls_version": 0, 00:14:01.334 "enable_ktls": false 00:14:01.334 } 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "subsystem": "vmd", 00:14:01.334 "config": [] 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "subsystem": "accel", 00:14:01.334 "config": [ 00:14:01.334 { 00:14:01.334 "method": "accel_set_options", 00:14:01.334 "params": { 00:14:01.334 "small_cache_size": 128, 00:14:01.334 "large_cache_size": 16, 00:14:01.334 "task_count": 2048, 00:14:01.334 "sequence_count": 2048, 
00:14:01.334 "buf_count": 2048 00:14:01.334 } 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "subsystem": "bdev", 00:14:01.334 "config": [ 00:14:01.334 { 00:14:01.334 "method": "bdev_set_options", 00:14:01.334 "params": { 00:14:01.334 "bdev_io_pool_size": 65535, 00:14:01.334 "bdev_io_cache_size": 256, 00:14:01.334 "bdev_auto_examine": true, 00:14:01.334 "iobuf_small_cache_size": 128, 00:14:01.334 "iobuf_large_cache_size": 16 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_raid_set_options", 00:14:01.334 "params": { 00:14:01.334 "process_window_size_kb": 1024, 00:14:01.334 "process_max_bandwidth_mb_sec": 0 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_iscsi_set_options", 00:14:01.334 "params": { 00:14:01.334 "timeout_sec": 30 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_nvme_set_options", 00:14:01.334 "params": { 00:14:01.334 "action_on_timeout": "none", 00:14:01.334 "timeout_us": 0, 00:14:01.334 "timeout_admin_us": 0, 00:14:01.334 "keep_alive_timeout_ms": 10000, 00:14:01.334 "arbitration_burst": 0, 00:14:01.334 "low_priority_weight": 0, 00:14:01.334 "medium_priority_weight": 0, 00:14:01.334 "high_priority_weight": 0, 00:14:01.334 "nvme_adminq_poll_period_us": 10000, 00:14:01.334 "nvme_ioq_poll_period_us": 0, 00:14:01.334 "io_queue_requests": 512, 00:14:01.334 "delay_cmd_submit": true, 00:14:01.334 "transport_retry_count": 4, 00:14:01.334 "bdev_retry_count": 3, 00:14:01.334 "transport_ack_timeout": 0, 00:14:01.334 "ctrlr_loss_timeout_sec": 0, 00:14:01.334 "reconnect_delay_sec": 0, 00:14:01.334 "fast_io_fail_timeout_sec": 0, 00:14:01.334 "disable_auto_failback": false, 00:14:01.334 "generate_uuids": false, 00:14:01.334 "transport_tos": 0, 00:14:01.334 "nvme_error_stat": false, 00:14:01.334 "rdma_srq_size": 0, 00:14:01.334 "io_path_stat": false, 00:14:01.334 "allow_accel_sequence": false, 00:14:01.334 "rdma_max_cq_size": 0, 00:14:01.334 "rdma_cm_event_timeout_ms": 0, 00:14:01.334 "dhchap_digests": [ 00:14:01.334 "sha256", 00:14:01.334 "sha384", 00:14:01.334 "sha512" 00:14:01.334 ], 00:14:01.334 "dhchap_dhgroups": [ 00:14:01.334 "null", 00:14:01.334 "ffdhe2048", 00:14:01.334 "ffdhe3072", 00:14:01.334 "ffdhe4096", 00:14:01.334 "ffdhe6144", 00:14:01.334 "ffdhe8192" 00:14:01.334 ] 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_nvme_attach_controller", 00:14:01.334 "params": { 00:14:01.334 "name": "nvme0", 00:14:01.334 "trtype": "TCP", 00:14:01.334 "adrfam": "IPv4", 00:14:01.334 "traddr": "10.0.0.3", 00:14:01.334 "trsvcid": "4420", 00:14:01.334 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:01.334 "prchk_reftag": false, 00:14:01.334 "prchk_guard": false, 00:14:01.334 "ctrlr_loss_timeout_sec": 0, 00:14:01.334 "reconnect_delay_sec": 0, 00:14:01.334 "fast_io_fail_timeout_sec": 0, 00:14:01.334 "psk": "key0", 00:14:01.334 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:01.334 "hdgst": false, 00:14:01.334 "ddgst": false, 00:14:01.334 "multipath": "multipath" 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_nvme_set_hotplug", 00:14:01.334 "params": { 00:14:01.334 "period_us": 100000, 00:14:01.334 "enable": false 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_enable_histogram", 00:14:01.334 "params": { 00:14:01.334 "name": "nvme0n1", 00:14:01.334 "enable": true 00:14:01.334 } 00:14:01.334 }, 00:14:01.334 { 00:14:01.334 "method": "bdev_wait_for_examine" 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }, 00:14:01.334 { 
00:14:01.334 "subsystem": "nbd", 00:14:01.334 "config": [] 00:14:01.334 } 00:14:01.334 ] 00:14:01.334 }' 00:14:01.334 [2024-11-05 09:36:47.271180] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:01.334 [2024-11-05 09:36:47.271704] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72238 ] 00:14:01.593 [2024-11-05 09:36:47.425398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.593 [2024-11-05 09:36:47.466339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.852 [2024-11-05 09:36:47.581127] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:01.852 [2024-11-05 09:36:47.613496] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:02.420 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:02.420 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:14:02.420 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:14:02.420 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:14:02.678 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.679 09:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:02.937 Running I/O for 1 seconds... 
00:14:03.871 3652.00 IOPS, 14.27 MiB/s 00:14:03.871 Latency(us) 00:14:03.871 [2024-11-05T09:36:49.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:03.871 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:14:03.871 Verification LBA range: start 0x0 length 0x2000 00:14:03.871 nvme0n1 : 1.03 3680.77 14.38 0.00 0.00 34258.11 8340.95 22043.93 00:14:03.871 [2024-11-05T09:36:49.829Z] =================================================================================================================== 00:14:03.871 [2024-11-05T09:36:49.829Z] Total : 3680.77 14.38 0.00 0.00 34258.11 8340.95 22043.93 00:14:03.871 { 00:14:03.871 "results": [ 00:14:03.871 { 00:14:03.871 "job": "nvme0n1", 00:14:03.871 "core_mask": "0x2", 00:14:03.871 "workload": "verify", 00:14:03.871 "status": "finished", 00:14:03.871 "verify_range": { 00:14:03.871 "start": 0, 00:14:03.871 "length": 8192 00:14:03.871 }, 00:14:03.871 "queue_depth": 128, 00:14:03.871 "io_size": 4096, 00:14:03.871 "runtime": 1.027231, 00:14:03.871 "iops": 3680.7689799081218, 00:14:03.871 "mibps": 14.3780038277661, 00:14:03.871 "io_failed": 0, 00:14:03.871 "io_timeout": 0, 00:14:03.871 "avg_latency_us": 34258.10847154433, 00:14:03.871 "min_latency_us": 8340.945454545454, 00:14:03.871 "max_latency_us": 22043.927272727273 00:14:03.871 } 00:14:03.871 ], 00:14:03.871 "core_count": 1 00:14:03.871 } 00:14:03.871 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:14:03.871 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:14:03.871 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:03.872 nvmf_trace.0 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 72238 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72238 ']' 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72238 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:03.872 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72238 00:14:04.130 09:36:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:04.130 killing process with pid 72238 00:14:04.130 Received shutdown signal, test time was about 1.000000 seconds 00:14:04.130 00:14:04.130 Latency(us) 00:14:04.130 [2024-11-05T09:36:50.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.130 [2024-11-05T09:36:50.088Z] =================================================================================================================== 00:14:04.130 [2024-11-05T09:36:50.088Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72238' 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72238 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 72238 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:04.130 09:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:04.130 rmmod nvme_tcp 00:14:04.130 rmmod nvme_fabrics 00:14:04.130 rmmod nvme_keyring 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 72206 ']' 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 72206 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 72206 ']' 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 72206 00:14:04.130 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72206 00:14:04.388 killing process with pid 72206 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72206' 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 72206 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # 
wait 72206 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:04.388 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:14:04.389 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:04.389 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:04.389 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:04.389 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:04.389 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:04.389 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.RHy2LBnUnY /tmp/tmp.ZeUxGJqE9X /tmp/tmp.Tb7u5KQtIo 00:14:04.647 ************************************ 00:14:04.647 END TEST nvmf_tls 00:14:04.647 ************************************ 00:14:04.647 00:14:04.647 real 1m23.824s 00:14:04.647 user 2m17.218s 00:14:04.647 sys 0m26.283s 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 
-- # xtrace_disable 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.647 ************************************ 00:14:04.647 START TEST nvmf_fips 00:14:04.647 ************************************ 00:14:04.647 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:14:04.906 * Looking for test storage... 00:14:04.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:04.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.906 --rc genhtml_branch_coverage=1 00:14:04.906 --rc genhtml_function_coverage=1 00:14:04.906 --rc genhtml_legend=1 00:14:04.906 --rc geninfo_all_blocks=1 00:14:04.906 --rc geninfo_unexecuted_blocks=1 00:14:04.906 00:14:04.906 ' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:04.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.906 --rc genhtml_branch_coverage=1 00:14:04.906 --rc genhtml_function_coverage=1 00:14:04.906 --rc genhtml_legend=1 00:14:04.906 --rc geninfo_all_blocks=1 00:14:04.906 --rc geninfo_unexecuted_blocks=1 00:14:04.906 00:14:04.906 ' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:04.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.906 --rc genhtml_branch_coverage=1 00:14:04.906 --rc genhtml_function_coverage=1 00:14:04.906 --rc genhtml_legend=1 00:14:04.906 --rc geninfo_all_blocks=1 00:14:04.906 --rc geninfo_unexecuted_blocks=1 00:14:04.906 00:14:04.906 ' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:04.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.906 --rc genhtml_branch_coverage=1 00:14:04.906 --rc genhtml_function_coverage=1 00:14:04.906 --rc genhtml_legend=1 00:14:04.906 --rc geninfo_all_blocks=1 00:14:04.906 --rc geninfo_unexecuted_blocks=1 00:14:04.906 00:14:04.906 ' 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
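The lt/ge helpers traced here (lt 1.15 2 for lcov above, and ge 3.1.1 3.0.0 for the OpenSSL check further down) come from scripts/common.sh and compare dotted version strings component by component, treating missing components as zero. A self-contained sketch of the same idea in plain bash (the real cmp_versions also splits on '-' and ':', which this sketch omits):

    version_ge() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 0   # strictly newer in this component
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 1   # strictly older
        done
        return 0                                        # all components equal
    }
    version_ge 3.1.1 3.0.0 && echo 'OpenSSL is new enough for the FIPS tests'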
00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:04.906 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.907 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:14:04.907 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:14:05.166 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:14:05.167 Error setting digest 00:14:05.167 40B2ED71167F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:14:05.167 40B2ED71167F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:05.167 
09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.167 09:36:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:05.167 Cannot find device "nvmf_init_br" 00:14:05.167 09:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:05.167 Cannot find device "nvmf_init_br2" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:05.167 Cannot find device "nvmf_tgt_br" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:05.167 Cannot find device "nvmf_tgt_br2" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:05.167 Cannot find device "nvmf_init_br" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:05.167 Cannot find device "nvmf_init_br2" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:05.167 Cannot find device "nvmf_tgt_br" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:05.167 Cannot find device "nvmf_tgt_br2" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:05.167 Cannot find device "nvmf_br" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:05.167 Cannot find device "nvmf_init_if" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:05.167 Cannot find device "nvmf_init_if2" 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:14:05.167 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:05.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.425 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:14:05.425 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:05.425 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:05.425 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:14:05.425 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:05.425 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:05.425 09:36:51 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:05.425 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:05.426 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:05.426 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.138 ms 00:14:05.426 00:14:05.426 --- 10.0.0.3 ping statistics --- 00:14:05.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.426 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:05.426 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:05.426 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:14:05.426 00:14:05.426 --- 10.0.0.4 ping statistics --- 00:14:05.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.426 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:05.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:05.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:14:05.426 00:14:05.426 --- 10.0.0.1 ping statistics --- 00:14:05.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.426 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:05.426 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:05.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:14:05.426 00:14:05.426 --- 10.0.0.2 ping statistics --- 00:14:05.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.426 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=72562 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 72562 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72562 ']' 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:05.426 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:05.684 [2024-11-05 09:36:51.465183] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
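Between nvmfappstart and the EAL banner above, waitforlisten blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. A rough sketch of that polling loop, built only from what the trace shows (rpc_addr, max_retries=100, and the later "(( i == 0 ))" / "return 0" exit checks); the real helper in autotest_common.sh very likely also probes the RPC endpoint itself, which is omitted here:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = max_retries; i != 0; i--)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it could listen
            [[ -S $rpc_addr ]] && break              # socket node exists; treat as ready
            sleep 0.5
        done
        (( i == 0 )) && return 1                     # retries exhausted
        return 0
    }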
00:14:05.684 [2024-11-05 09:36:51.465306] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.685 [2024-11-05 09:36:51.615778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.943 [2024-11-05 09:36:51.654404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.943 [2024-11-05 09:36:51.654475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.943 [2024-11-05 09:36:51.654500] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.943 [2024-11-05 09:36:51.654510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.944 [2024-11-05 09:36:51.654518] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:05.944 [2024-11-05 09:36:51.654894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.944 [2024-11-05 09:36:51.689090] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.RQq 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.RQq 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.RQq 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.RQq 00:14:05.944 09:36:51 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:06.202 [2024-11-05 09:36:52.116175] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.202 [2024-11-05 09:36:52.132125] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:06.202 [2024-11-05 09:36:52.132397] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:06.461 malloc0 00:14:06.461 09:36:52 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=72596 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 72596 /var/tmp/bdevperf.sock 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 72596 ']' 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.461 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:06.461 [2024-11-05 09:36:52.277420] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:06.461 [2024-11-05 09:36:52.277528] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72596 ] 00:14:06.720 [2024-11-05 09:36:52.428426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:06.720 [2024-11-05 09:36:52.460818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.720 [2024-11-05 09:36:52.489957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:06.720 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.720 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:14:06.720 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.RQq 00:14:06.979 09:36:52 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:07.237 [2024-11-05 09:36:53.036653] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:07.237 TLSTESTn1 00:14:07.237 09:36:53 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:07.496 Running I/O for 10 seconds... 
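The TLS plumbing just traced, gathered into one snippet for readability. Every command and value here is taken from this run's trace; the mktemp suffix (.RQq) is random per run:

    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"   # PSKs are secrets; keep them owner-readable only
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0

The key is registered with the bdevperf app's own RPC socket (not the target's), since it is the initiator side that presents the PSK when attaching over NVMe/TCP with TLS.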
00:14:09.372 3905.00 IOPS, 15.25 MiB/s [2024-11-05T09:36:56.707Z] 3947.50 IOPS, 15.42 MiB/s [2024-11-05T09:36:57.275Z] 3974.67 IOPS, 15.53 MiB/s [2024-11-05T09:36:58.650Z] 3985.00 IOPS, 15.57 MiB/s [2024-11-05T09:36:59.585Z] 4003.40 IOPS, 15.64 MiB/s [2024-11-05T09:37:00.524Z] 4013.33 IOPS, 15.68 MiB/s [2024-11-05T09:37:01.460Z] 4016.57 IOPS, 15.69 MiB/s [2024-11-05T09:37:02.395Z] 4016.88 IOPS, 15.69 MiB/s [2024-11-05T09:37:03.332Z] 4023.11 IOPS, 15.72 MiB/s [2024-11-05T09:37:03.332Z] 4025.10 IOPS, 15.72 MiB/s 00:14:17.374 Latency(us) 00:14:17.374 [2024-11-05T09:37:03.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.374 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:17.374 Verification LBA range: start 0x0 length 0x2000 00:14:17.374 TLSTESTn1 : 10.02 4030.82 15.75 0.00 0.00 31694.83 5838.66 24903.68 00:14:17.374 [2024-11-05T09:37:03.332Z] =================================================================================================================== 00:14:17.374 [2024-11-05T09:37:03.332Z] Total : 4030.82 15.75 0.00 0.00 31694.83 5838.66 24903.68 00:14:17.374 { 00:14:17.374 "results": [ 00:14:17.374 { 00:14:17.374 "job": "TLSTESTn1", 00:14:17.374 "core_mask": "0x4", 00:14:17.374 "workload": "verify", 00:14:17.374 "status": "finished", 00:14:17.374 "verify_range": { 00:14:17.374 "start": 0, 00:14:17.374 "length": 8192 00:14:17.374 }, 00:14:17.374 "queue_depth": 128, 00:14:17.374 "io_size": 4096, 00:14:17.374 "runtime": 10.01608, 00:14:17.375 "iops": 4030.818443942141, 00:14:17.375 "mibps": 15.745384546648989, 00:14:17.375 "io_failed": 0, 00:14:17.375 "io_timeout": 0, 00:14:17.375 "avg_latency_us": 31694.831269322654, 00:14:17.375 "min_latency_us": 5838.6618181818185, 00:14:17.375 "max_latency_us": 24903.68 00:14:17.375 } 00:14:17.375 ], 00:14:17.375 "core_count": 1 00:14:17.375 } 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:14:17.375 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:17.375 nvmf_trace.0 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 72596 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72596 ']' 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72596 
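A quick arithmetic check on the JSON results above: the reported MiB/s is just IOPS multiplied by the 4096-byte I/O size, converted to MiB:

    awk 'BEGIN { printf "%.6f\n", 4030.818443942141 * 4096 / (1024 * 1024) }'
    # prints 15.745385, matching the "mibps" field above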
00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72596 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:14:17.634 killing process with pid 72596 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72596' 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72596 00:14:17.634 Received shutdown signal, test time was about 10.000000 seconds 00:14:17.634 00:14:17.634 Latency(us) 00:14:17.634 [2024-11-05T09:37:03.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:17.634 [2024-11-05T09:37:03.592Z] =================================================================================================================== 00:14:17.634 [2024-11-05T09:37:03.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72596 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:17.634 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:17.893 rmmod nvme_tcp 00:14:17.893 rmmod nvme_fabrics 00:14:17.893 rmmod nvme_keyring 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 72562 ']' 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 72562 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 72562 ']' 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 72562 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72562 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 
= sudo ']' 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72562' 00:14:17.893 killing process with pid 72562 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 72562 00:14:17.893 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 72562 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:18.152 09:37:03 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:14:18.152 09:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.RQq 00:14:18.152 ************************************ 00:14:18.152 END TEST nvmf_fips 00:14:18.152 ************************************ 00:14:18.152 00:14:18.152 real 0m13.518s 00:14:18.152 user 0m18.571s 00:14:18.152 sys 0m5.527s 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:18.152 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:18.412 ************************************ 00:14:18.412 START TEST nvmf_control_msg_list 00:14:18.412 ************************************ 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:14:18.412 * Looking for test storage... 00:14:18.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:14:18.412 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.413 --rc genhtml_branch_coverage=1 00:14:18.413 --rc genhtml_function_coverage=1 00:14:18.413 --rc genhtml_legend=1 00:14:18.413 --rc geninfo_all_blocks=1 00:14:18.413 --rc geninfo_unexecuted_blocks=1 00:14:18.413 00:14:18.413 ' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.413 --rc genhtml_branch_coverage=1 00:14:18.413 --rc genhtml_function_coverage=1 00:14:18.413 --rc genhtml_legend=1 00:14:18.413 --rc geninfo_all_blocks=1 00:14:18.413 --rc geninfo_unexecuted_blocks=1 00:14:18.413 00:14:18.413 ' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.413 --rc genhtml_branch_coverage=1 00:14:18.413 --rc genhtml_function_coverage=1 00:14:18.413 --rc genhtml_legend=1 00:14:18.413 --rc geninfo_all_blocks=1 00:14:18.413 --rc geninfo_unexecuted_blocks=1 00:14:18.413 00:14:18.413 ' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:18.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:18.413 --rc genhtml_branch_coverage=1 00:14:18.413 --rc genhtml_function_coverage=1 00:14:18.413 --rc genhtml_legend=1 00:14:18.413 --rc geninfo_all_blocks=1 00:14:18.413 --rc 
geninfo_unexecuted_blocks=1 00:14:18.413 00:14:18.413 ' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:18.413 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:18.413 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:18.414 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:18.414 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:18.414 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:18.673 Cannot find device "nvmf_init_br" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:18.673 Cannot find device "nvmf_init_br2" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:18.673 Cannot find device "nvmf_tgt_br" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:18.673 Cannot find device "nvmf_tgt_br2" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:18.673 Cannot find device "nvmf_init_br" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:18.673 Cannot find device "nvmf_init_br2" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:18.673 Cannot find device "nvmf_tgt_br" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:18.673 Cannot find device "nvmf_tgt_br2" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:18.673 Cannot find device "nvmf_br" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:18.673 Cannot find 
device "nvmf_init_if" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:18.673 Cannot find device "nvmf_init_if2" 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:18.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:18.673 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:18.673 09:37:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:18.673 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:18.936 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:18.936 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:14:18.936 00:14:18.936 --- 10.0.0.3 ping statistics --- 00:14:18.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.936 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:18.936 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:18.936 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:14:18.936 00:14:18.936 --- 10.0.0.4 ping statistics --- 00:14:18.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.936 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:18.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:18.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms
00:14:18.936
00:14:18.936 --- 10.0.0.1 ping statistics ---
00:14:18.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:18.936 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:14:18.936 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:18.936 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms
00:14:18.936
00:14:18.936 --- 10.0.0.2 ping statistics ---
00:14:18.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:18.936 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=72975
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 72975
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 72975 ']'
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100
00:14:18.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
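
The nvmf_veth_init trace above is worth reading as a unit: it creates two initiator veth pairs (nvmf_init_if/nvmf_init_if2 on 10.0.0.1-2, with bridge-side ends nvmf_init_br/nvmf_init_br2) and two target pairs whose far ends (nvmf_tgt_if/nvmf_tgt_if2 on 10.0.0.3-4) are moved into the nvmf_tgt_ns_spdk namespace, joins all bridge-side ends to nvmf_br, opens TCP port 4420 in iptables, and then verifies connectivity with the four pings. Condensed to a single initiator/target pair, the setup reduces to roughly the following (a sketch distilled from the trace, not the verbatim common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                      # bridge the two pairs together
  ip link set nvmf_tgt_br master nvmf_br
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace can reach the initiator side
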
00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:18.936 09:37:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:18.936 [2024-11-05 09:37:04.803062] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:18.936 [2024-11-05 09:37:04.803168] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.205 [2024-11-05 09:37:04.955681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:19.205 [2024-11-05 09:37:04.993739] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.205 [2024-11-05 09:37:04.993832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.205 [2024-11-05 09:37:04.993865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.205 [2024-11-05 09:37:04.993876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.205 [2024-11-05 09:37:04.993885] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:19.205 [2024-11-05 09:37:04.994229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.205 [2024-11-05 09:37:05.029898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:19.205 [2024-11-05 09:37:05.135525] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:19.205 Malloc0 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.205 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:19.464 [2024-11-05 09:37:05.175102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=73000 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=73001 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=73002 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:19.464 09:37:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 73000 00:14:19.464 [2024-11-05 09:37:05.369721] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:19.464 [2024-11-05 09:37:05.369961] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:19.464 [2024-11-05 09:37:05.370147] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:20.840 Initializing NVMe Controllers
00:14:20.840 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:14:20.840 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:14:20.840 Initialization complete. Launching workers.
00:14:20.840 ========================================================
00:14:20.840                                                                                                          Latency(us)
00:14:20.840 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:20.840 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2:    3289.00      12.85     303.73     210.75     484.76
00:14:20.840 ========================================================
00:14:20.840 Total                                                                    :    3289.00      12.85     303.73     210.75     484.76
00:14:20.840
00:14:20.840 Initializing NVMe Controllers
00:14:20.840 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:14:20.840 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:14:20.840 Initialization complete. Launching workers.
00:14:20.840 ========================================================
00:14:20.840                                                                                                          Latency(us)
00:14:20.840 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:20.840 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1:    3280.00      12.81     304.59     228.36    1642.42
00:14:20.840 ========================================================
00:14:20.840 Total                                                                    :    3280.00      12.81     304.59     228.36    1642.42
00:14:20.840
00:14:20.840 Initializing NVMe Controllers
00:14:20.840 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0
00:14:20.840 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:14:20.840 Initialization complete. Launching workers.
00:14:20.840 ========================================================
00:14:20.840                                                                                                          Latency(us)
00:14:20.840 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:14:20.840 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3:    3293.00      12.86     303.33     196.23     482.43
00:14:20.840 ========================================================
00:14:20.840 Total                                                                    :    3293.00      12.86     303.33     196.23     482.43
00:14:20.840
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 73001
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 73002
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20}
00:14:20.840 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:14:20.840 rmmod nvme_tcp
00:14:20.841 rmmod nvme_fabrics
00:14:20.841 rmmod nvme_keyring
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 72975 ']'
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 72975
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 72975 ']'
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 72975
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72975
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:14:20.841 killing process with pid 72975
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72975'
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 72975
00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list --
common/autotest_common.sh@976 -- # wait 72975 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:20.841 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.100 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:14:21.100 00:14:21.100 real 0m2.780s 00:14:21.100 user 0m4.682s 00:14:21.101 
sys 0m1.324s 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:21.101 ************************************ 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:14:21.101 END TEST nvmf_control_msg_list 00:14:21.101 ************************************ 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:21.101 ************************************ 00:14:21.101 START TEST nvmf_wait_for_buf 00:14:21.101 ************************************ 00:14:21.101 09:37:06 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:14:21.360 * Looking for test storage... 00:14:21.360 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:21.360 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:21.361 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:21.361 --rc genhtml_branch_coverage=1 00:14:21.361 --rc genhtml_function_coverage=1 00:14:21.361 --rc genhtml_legend=1 00:14:21.361 --rc geninfo_all_blocks=1 00:14:21.361 --rc geninfo_unexecuted_blocks=1 00:14:21.361 00:14:21.361 ' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.361 09:37:07 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:21.361 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
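
Note the recurring complaint from /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh line 33 in both test setups: `[: : integer expression expected`. The trace shows why: the command is `'[' '' -eq 1 ']'`, and the -eq operator of `[` requires integer operands, so an unset or empty variable makes the test error out (exit status 2) rather than evaluate to false. It is harmless here because the script only branches on success, but the usual hardening is a default expansion; a minimal reproduction (the variable name is illustrative, not the one common.sh uses):

  flag=''
  [ "$flag" -eq 1 ]        # error: [: : integer expression expected (status 2)
  [ "${flag:-0}" -eq 1 ]   # empty expands to 0; the test is simply false (status 1)
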
00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.361 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:21.362 Cannot find device "nvmf_init_br" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:21.362 Cannot find device "nvmf_init_br2" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:21.362 Cannot find device "nvmf_tgt_br" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.362 Cannot find device "nvmf_tgt_br2" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:21.362 Cannot find device "nvmf_init_br" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:21.362 Cannot find device "nvmf_init_br2" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:21.362 Cannot find device "nvmf_tgt_br" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:21.362 Cannot find device "nvmf_tgt_br2" 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:14:21.362 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:21.620 Cannot find device "nvmf_br" 00:14:21.620 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:14:21.620 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:21.620 Cannot find device "nvmf_init_if" 00:14:21.620 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:14:21.620 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:21.620 Cannot find device "nvmf_init_if2" 00:14:21.620 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:14:21.620 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.620 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.621 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:21.621 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:21.879 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.879 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:14:21.879 00:14:21.879 --- 10.0.0.3 ping statistics --- 00:14:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.879 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:21.879 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:21.879 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:14:21.879 00:14:21.879 --- 10.0.0.4 ping statistics --- 00:14:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.879 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.879 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.879 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:14:21.879 00:14:21.879 --- 10.0.0.1 ping statistics --- 00:14:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.879 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:21.879 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:21.879 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:14:21.879 00:14:21.879 --- 10.0.0.2 ping statistics --- 00:14:21.879 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.879 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=73232 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 73232 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 73232 ']' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:21.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:21.879 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:21.879 [2024-11-05 09:37:07.696202] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
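[Editor's note] The nvmf/common.sh veth setup traced above boils down to the following standalone sketch. It rebuilds the same topology — two initiator veths (10.0.0.1/2) kept in the root namespace, two target veths (10.0.0.3/4) moved into nvmf_tgt_ns_spdk, and all four peer ends enslaved to the nvmf_br bridge — using only the interface names and addresses visible in the trace; the real helper additionally tears down stale devices first and tolerates failures with `|| true`.

    #!/usr/bin/env bash
    # Minimal sketch of the veth/netns topology built by nvmf_veth_init above.
    set -e
    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side pairs stay in the root namespace ...
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    # ... target-side pairs are moved into the namespace.
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # One bridge joins the four root-namespace peer ends together.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

The four pings that follow in the trace (10.0.0.3/4 from the root namespace, 10.0.0.1/2 from inside nvmf_tgt_ns_spdk) are the sanity check that this bridge path works in both directions.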
00:14:21.879 [2024-11-05 09:37:07.696484] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:22.138 [2024-11-05 09:37:07.848073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.138 [2024-11-05 09:37:07.885871] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:22.138 [2024-11-05 09:37:07.886074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:22.138 [2024-11-05 09:37:07.886237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:22.138 [2024-11-05 09:37:07.886386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:22.138 [2024-11-05 09:37:07.886429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:22.138 [2024-11-05 09:37:07.886890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.138 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:22.138 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:14:22.138 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:22.138 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.138 09:37:07 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.138 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.138 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:14:22.138 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:14:22.138 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:14:22.138 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.138 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:14:22.139 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.139 09:37:08 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.139 [2024-11-05 09:37:08.074977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.398 Malloc0 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.398 [2024-11-05 09:37:08.124509] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:22.398 [2024-11-05 09:37:08.152591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.398 09:37:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:22.398 [2024-11-05 09:37:08.356026] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:23.773 Initializing NVMe Controllers 00:14:23.773 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:14:23.773 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:14:23.773 Initialization complete. Launching workers. 00:14:23.773 ======================================================== 00:14:23.773 Latency(us) 00:14:23.773 Device Information : IOPS MiB/s Average min max 00:14:23.773 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 496.00 62.00 8097.11 5003.09 15940.05 00:14:23.773 ======================================================== 00:14:23.773 Total : 496.00 62.00 8097.11 5003.09 15940.05 00:14:23.773 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4712 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4712 -eq 0 ]] 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:23.773 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:24.032 rmmod nvme_tcp 00:14:24.032 rmmod nvme_fabrics 00:14:24.032 rmmod nvme_keyring 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 73232 ']' 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 73232 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 73232 ']' 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- 
# kill -0 73232 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # uname 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73232 00:14:24.032 killing process with pid 73232 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73232' 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 73232 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 73232 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:24.032 09:37:09 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:24.290 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:14:24.291 00:14:24.291 real 0m3.194s 00:14:24.291 user 0m2.617s 00:14:24.291 sys 0m0.732s 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:24.291 ************************************ 00:14:24.291 END TEST nvmf_wait_for_buf 00:14:24.291 ************************************ 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:24.291 ************************************ 00:14:24.291 START TEST nvmf_nsid 00:14:24.291 ************************************ 00:14:24.291 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:14:24.550 * Looking for test storage... 
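[Editor's note] Before the nsid test proceeds, it is worth condensing what nvmf_wait_for_buf just verified: the iobuf small pool is deliberately undersized (154 buffers of 8192 bytes) and the TCP transport is created with only 24 shared buffers (-n 24 -b 24), so the 128 KiB reads issued by perf exhaust the pool and force requests onto the buffer-wait/retry path; the test passes because iobuf_get_stats reports a non-zero small_pool.retry count (4712 here). A hedged reconstruction of that RPC sequence, assuming a target already listening on the default /var/tmp/spdk.sock (all flags and values are taken from the trace):

    # Sketch of the wait_for_buf flow shown above; rpc path from the trace.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    $rpc accel_set_options --small-cache-size 0 --large-cache-size 0
    $rpc iobuf_set_options --small-pool-count 154 --small_bufsize=8192
    $rpc framework_start_init
    $rpc bdev_malloc_create -b Malloc0 32 512
    $rpc nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    $rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 \
        -t tcp -a 10.0.0.3 -s 4420

    # Drive 128 KiB random reads for one second, then expect retries > 0.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 \
        -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    retries=$($rpc iobuf_get_stats \
        | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry')
    [[ $retries -gt 0 ]] && echo "buffer-wait path exercised ($retries retries)"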
00:14:24.550 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.550 --rc genhtml_branch_coverage=1 00:14:24.550 --rc genhtml_function_coverage=1 00:14:24.550 --rc genhtml_legend=1 00:14:24.550 --rc geninfo_all_blocks=1 00:14:24.550 --rc geninfo_unexecuted_blocks=1 00:14:24.550 00:14:24.550 ' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.550 --rc genhtml_branch_coverage=1 00:14:24.550 --rc genhtml_function_coverage=1 00:14:24.550 --rc genhtml_legend=1 00:14:24.550 --rc geninfo_all_blocks=1 00:14:24.550 --rc geninfo_unexecuted_blocks=1 00:14:24.550 00:14:24.550 ' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.550 --rc genhtml_branch_coverage=1 00:14:24.550 --rc genhtml_function_coverage=1 00:14:24.550 --rc genhtml_legend=1 00:14:24.550 --rc geninfo_all_blocks=1 00:14:24.550 --rc geninfo_unexecuted_blocks=1 00:14:24.550 00:14:24.550 ' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:24.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.550 --rc genhtml_branch_coverage=1 00:14:24.550 --rc genhtml_function_coverage=1 00:14:24.550 --rc genhtml_legend=1 00:14:24.550 --rc geninfo_all_blocks=1 00:14:24.550 --rc geninfo_unexecuted_blocks=1 00:14:24.550 00:14:24.550 ' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
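[Editor's note] The lcov gate traced above goes through scripts/common.sh's cmp_versions: both version strings are split on '.', '-' and ':' and compared field by field, so `lt 1.15 2` succeeds because 1 < 2 in the very first field, selecting the pre-2.x --rc option spelling. A simplified equivalent (not the exact helper, which also handles the other comparison operators):

    # Field-wise version comparison in the spirit of cmp_versions above.
    lt() { # usage: lt 1.15 2  -> returns 0 when $1 < $2
        local IFS=.-: v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
        done
        return 1 # equal is not less-than
    }

    lt 1.15 2 && echo "lcov predates 2.x: use legacy --rc option names"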
00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.550 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:24.551 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:24.551 Cannot find device "nvmf_init_br" 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:24.551 Cannot find device "nvmf_init_br2" 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:24.551 Cannot find device "nvmf_tgt_br" 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:14:24.551 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:24.809 Cannot find device "nvmf_tgt_br2" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:24.809 Cannot find device "nvmf_init_br" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:24.809 Cannot find device "nvmf_init_br2" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:24.809 Cannot find device "nvmf_tgt_br" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:24.809 Cannot find device "nvmf_tgt_br2" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:24.809 Cannot find device "nvmf_br" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:24.809 Cannot find device "nvmf_init_if" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:24.809 Cannot find device "nvmf_init_if2" 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:24.809 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.809 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:14:24.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:24.810 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:25.068 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:25.068 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:25.068 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:25.068 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:25.068 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
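[Editor's note] The ipts calls that follow are a thin tagging wrapper: every rule the suite inserts carries an 'SPDK_NVMF:' comment recording its own spec, which is what let the iptr cleanup earlier in the log restore the ruleset by filtering tagged rules out of iptables-save. A sketch of both halves, matching the wrapper semantics visible in the trace:

    # Tag-and-restore iptables pattern used by nvmf/common.sh (simplified).
    ipts() { # insert a rule, embedding its own spec in a comment
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() { # drop every tagged rule in one shot
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }

    ipts -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # ... run tests ...
    iptr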
00:14:25.068 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:25.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:25.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:14:25.069 00:14:25.069 --- 10.0.0.3 ping statistics --- 00:14:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.069 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:25.069 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:25.069 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:25.069 00:14:25.069 --- 10.0.0.4 ping statistics --- 00:14:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.069 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:25.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:25.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:14:25.069 00:14:25.069 --- 10.0.0.1 ping statistics --- 00:14:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.069 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:25.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:25.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:14:25.069 00:14:25.069 --- 10.0.0.2 ping statistics --- 00:14:25.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:25.069 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:25.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=73493 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 73493 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73493 ']' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.069 09:37:10 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:25.069 [2024-11-05 09:37:10.946922] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
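[Editor's note] nvmfappstart launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal stand-in for that wait is sketched below; it merely polls for the UNIX socket, whereas the real helper in autotest_common.sh also tracks the pid and retry count (the 100-iteration budget mirrors its max_retries from the trace; the socket path is the default assumed here).

    # Hypothetical minimal wait-for-RPC loop; not the real waitforlisten.
    wait_for_rpc_sock() {
        local sock=${1:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            [[ -S $sock ]] && return 0  # socket exists -> target is up
            sleep 0.1
        done
        echo "timed out waiting for $sock" >&2
        return 1
    }

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 &
    wait_for_rpc_sock /var/tmp/spdk.sock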
00:14:25.069 [2024-11-05 09:37:10.947232] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.328 [2024-11-05 09:37:11.098640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.328 [2024-11-05 09:37:11.137106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.328 [2024-11-05 09:37:11.137325] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.328 [2024-11-05 09:37:11.137517] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.328 [2024-11-05 09:37:11.137721] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.328 [2024-11-05 09:37:11.137908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:25.328 [2024-11-05 09:37:11.138388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.328 [2024-11-05 09:37:11.172923] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=73522 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=9b19a62c-7f6b-4e3f-af8b-b41c51d7d087 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=2e5d1686-4930-4c19-a4be-2a9cb1068512 00:14:25.328 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=bd68ba43-676f-43fb-bba0-9f7f483d25a9 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 null0 00:14:25.587 null1 00:14:25.587 null2 00:14:25.587 [2024-11-05 09:37:11.319411] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.587 [2024-11-05 09:37:11.328099] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:25.587 [2024-11-05 09:37:11.328494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73522 ] 00:14:25.587 [2024-11-05 09:37:11.343529] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 73522 /var/tmp/tgt2.sock 00:14:25.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 73522 ']' 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
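[Editor's note] The namespace-identity checks later in this test compare each uuidgen value against what the kernel reports for the connected controller: uuid2nguid simply strips the dashes (an NGUID is the same 16 bytes, conventionally printed uppercase), and nvme_get_nguid reads the value back with nvme id-ns. A condensed version of that round trip, using the first UUID from the trace and assuming the fabric connect surfaced as /dev/nvme0n1 as it does below:

    # Condensed NGUID round-trip from the nsid test (device name assumed).
    ns1uuid=9b19a62c-7f6b-4e3f-af8b-b41c51d7d087

    # UUID -> NGUID: drop dashes, normalize case for the comparison.
    expected=$(tr -d - <<< "$ns1uuid" | tr '[:lower:]' '[:upper:]')
    actual=$(nvme id-ns /dev/nvme0n1 -o json | jq -r .nguid \
        | tr '[:lower:]' '[:upper:]')

    [[ $actual == "$expected" ]] \
        && echo "NSID 1 NGUID matches its namespace UUID: $actual"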
00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:25.587 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:25.587 [2024-11-05 09:37:11.480487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.587 [2024-11-05 09:37:11.519889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.846 [2024-11-05 09:37:11.566272] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.846 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:25.846 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:14:25.846 09:37:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:14:26.414 [2024-11-05 09:37:12.117414] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:26.414 [2024-11-05 09:37:12.133531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:14:26.414 nvme0n1 nvme0n2 00:14:26.414 nvme1n1 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:14:26.414 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:14:26.415 09:37:12 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:27.788 09:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 9b19a62c-7f6b-4e3f-af8b-b41c51d7d087 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:27.788 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9b19a62c7f6b4e3faf8bb41c51d7d087 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9B19A62C7F6B4E3FAF8BB41C51D7D087 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 9B19A62C7F6B4E3FAF8BB41C51D7D087 == \9\B\1\9\A\6\2\C\7\F\6\B\4\E\3\F\A\F\8\B\B\4\1\C\5\1\D\7\D\0\8\7 ]] 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 2e5d1686-4930-4c19-a4be-2a9cb1068512 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=2e5d168649304c19a4be2a9cb1068512 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 2E5D168649304C19A4BE2A9CB1068512 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 2E5D168649304C19A4BE2A9CB1068512 == \2\E\5\D\1\6\8\6\4\9\3\0\4\C\1\9\A\4\B\E\2\A\9\C\B\1\0\6\8\5\1\2 ]] 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:14:27.789 09:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid bd68ba43-676f-43fb-bba0-9f7f483d25a9 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bd68ba43676f43fbbba09f7f483d25a9 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BD68BA43676F43FBBBA09F7F483D25A9 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ BD68BA43676F43FBBBA09F7F483D25A9 == \B\D\6\8\B\A\4\3\6\7\6\F\4\3\F\B\B\B\A\0\9\F\7\F\4\8\3\D\2\5\A\9 ]] 00:14:27.789 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 73522 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73522 ']' 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73522 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73522 00:14:28.049 killing process with pid 73522 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73522' 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73522 00:14:28.049 09:37:13 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73522 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' 
tcp == tcp ']' 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:28.308 rmmod nvme_tcp 00:14:28.308 rmmod nvme_fabrics 00:14:28.308 rmmod nvme_keyring 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 73493 ']' 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 73493 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 73493 ']' 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 73493 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73493 00:14:28.308 killing process with pid 73493 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73493' 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 73493 00:14:28.308 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 73493 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:28.566 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:14:28.824 00:14:28.824 real 0m4.359s 00:14:28.824 user 0m6.480s 00:14:28.824 sys 0m1.545s 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.824 ************************************ 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:14:28.824 END TEST nvmf_nsid 00:14:28.824 ************************************ 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:28.824 ************************************ 00:14:28.824 END TEST nvmf_target_extra 00:14:28.824 ************************************ 00:14:28.824 00:14:28.824 real 5m9.251s 00:14:28.824 user 10m56.464s 00:14:28.824 sys 1m5.598s 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.824 09:37:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.824 09:37:14 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:28.824 09:37:14 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:28.824 09:37:14 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.824 09:37:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:28.824 ************************************ 00:14:28.824 START TEST nvmf_host 00:14:28.824 ************************************ 00:14:28.824 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:14:28.824 * Looking for test storage... 
00:14:28.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:14:28.824 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:28.824 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:28.824 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.083 --rc genhtml_branch_coverage=1 00:14:29.083 --rc genhtml_function_coverage=1 00:14:29.083 --rc genhtml_legend=1 00:14:29.083 --rc geninfo_all_blocks=1 00:14:29.083 --rc geninfo_unexecuted_blocks=1 00:14:29.083 00:14:29.083 ' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:29.083 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:14:29.083 --rc genhtml_branch_coverage=1 00:14:29.083 --rc genhtml_function_coverage=1 00:14:29.083 --rc genhtml_legend=1 00:14:29.083 --rc geninfo_all_blocks=1 00:14:29.083 --rc geninfo_unexecuted_blocks=1 00:14:29.083 00:14:29.083 ' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.083 --rc genhtml_branch_coverage=1 00:14:29.083 --rc genhtml_function_coverage=1 00:14:29.083 --rc genhtml_legend=1 00:14:29.083 --rc geninfo_all_blocks=1 00:14:29.083 --rc geninfo_unexecuted_blocks=1 00:14:29.083 00:14:29.083 ' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:29.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.083 --rc genhtml_branch_coverage=1 00:14:29.083 --rc genhtml_function_coverage=1 00:14:29.083 --rc genhtml_legend=1 00:14:29.083 --rc geninfo_all_blocks=1 00:14:29.083 --rc geninfo_unexecuted_blocks=1 00:14:29.083 00:14:29.083 ' 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.083 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:29.084 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:29.084 
09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:29.084 ************************************ 00:14:29.084 START TEST nvmf_identify 00:14:29.084 ************************************ 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:29.084 * Looking for test storage... 00:14:29.084 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:14:29.084 09:37:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:14:29.343 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:29.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.344 --rc genhtml_branch_coverage=1 00:14:29.344 --rc genhtml_function_coverage=1 00:14:29.344 --rc genhtml_legend=1 00:14:29.344 --rc geninfo_all_blocks=1 00:14:29.344 --rc geninfo_unexecuted_blocks=1 00:14:29.344 00:14:29.344 ' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:29.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.344 --rc genhtml_branch_coverage=1 00:14:29.344 --rc genhtml_function_coverage=1 00:14:29.344 --rc genhtml_legend=1 00:14:29.344 --rc geninfo_all_blocks=1 00:14:29.344 --rc geninfo_unexecuted_blocks=1 00:14:29.344 00:14:29.344 ' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:29.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.344 --rc genhtml_branch_coverage=1 00:14:29.344 --rc genhtml_function_coverage=1 00:14:29.344 --rc genhtml_legend=1 00:14:29.344 --rc geninfo_all_blocks=1 00:14:29.344 --rc geninfo_unexecuted_blocks=1 00:14:29.344 00:14:29.344 ' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:29.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:29.344 --rc genhtml_branch_coverage=1 00:14:29.344 --rc genhtml_function_coverage=1 00:14:29.344 --rc genhtml_legend=1 00:14:29.344 --rc geninfo_all_blocks=1 00:14:29.344 --rc geninfo_unexecuted_blocks=1 00:14:29.344 00:14:29.344 ' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.344 
09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:29.344 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:29.344 09:37:15 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:29.344 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:29.345 Cannot find device "nvmf_init_br" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:29.345 Cannot find device "nvmf_init_br2" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:29.345 Cannot find device "nvmf_tgt_br" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:14:29.345 Cannot find device "nvmf_tgt_br2" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:29.345 Cannot find device "nvmf_init_br" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:29.345 Cannot find device "nvmf_init_br2" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:29.345 Cannot find device "nvmf_tgt_br" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:29.345 Cannot find device "nvmf_tgt_br2" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:29.345 Cannot find device "nvmf_br" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:29.345 Cannot find device "nvmf_init_if" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:29.345 Cannot find device "nvmf_init_if2" 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:29.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:29.345 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:29.345 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:29.603 
09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:29.603 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:29.604 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:29.604 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:14:29.604 00:14:29.604 --- 10.0.0.3 ping statistics --- 00:14:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.604 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:29.604 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:29.604 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:14:29.604 00:14:29.604 --- 10.0.0.4 ping statistics --- 00:14:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.604 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:29.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:14:29.604 00:14:29.604 --- 10.0.0.1 ping statistics --- 00:14:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.604 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:29.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:14:29.604 00:14:29.604 --- 10.0.0.2 ping statistics --- 00:14:29.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.604 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=73872 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 73872 00:14:29.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
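The four pings above validate the veth topology that nvmf_veth_init builds before the identify test starts: two initiator-side interfaces (10.0.0.1/.2) on the host and two target-side interfaces (10.0.0.3/.4) inside the nvmf_tgt_ns_spdk namespace, all joined through the nvmf_br bridge. Condensed from the ip(8) calls traced above (the iptables ACCEPT rules are omitted here):

    ip netns add nvmf_tgt_ns_spdk

    # Four veth pairs: the *_if ends carry addresses, the *_br ends join the bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Move the target ends into the namespace, then address everything.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring every link up, then bridge the four host-side ends together.
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done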
00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 73872 ']' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:29.604 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:29.862 [2024-11-05 09:37:15.608021] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:29.862 [2024-11-05 09:37:15.608348] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.862 [2024-11-05 09:37:15.760546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.862 [2024-11-05 09:37:15.802575] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.862 [2024-11-05 09:37:15.802860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.862 [2024-11-05 09:37:15.803023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.862 [2024-11-05 09:37:15.803042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.862 [2024-11-05 09:37:15.803051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
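The app_setup_trace notices above are the standard recipe for inspecting this run's tracepoints; since the target was launched with -e 0xFFFF, every tracepoint group is enabled. Either attach to the live shared-memory trace or copy it for offline decoding (the -f form reads a saved trace file per SPDK's tracing docs; the /tmp path below is illustrative):

    # Live snapshot of app instance 0's "nvmf" trace shm:
    spdk_trace -s nvmf -i 0

    # Or keep the shm file and decode it after the run:
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
    spdk_trace -f /tmp/nvmf_trace.0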
00:14:29.862 [2024-11-05 09:37:15.804898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.863 [2024-11-05 09:37:15.805396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.863 [2024-11-05 09:37:15.805557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.863 [2024-11-05 09:37:15.805566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.121 [2024-11-05 09:37:15.839106] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 [2024-11-05 09:37:15.901795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 Malloc0 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:15 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 [2024-11-05 09:37:16.002129] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.121 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.121 [ 00:14:30.122 { 00:14:30.122 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:30.122 "subtype": "Discovery", 00:14:30.122 "listen_addresses": [ 00:14:30.122 { 00:14:30.122 "trtype": "TCP", 00:14:30.122 "adrfam": "IPv4", 00:14:30.122 "traddr": "10.0.0.3", 00:14:30.122 "trsvcid": "4420" 00:14:30.122 } 00:14:30.122 ], 00:14:30.122 "allow_any_host": true, 00:14:30.122 "hosts": [] 00:14:30.122 }, 00:14:30.122 { 00:14:30.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:30.122 "subtype": "NVMe", 00:14:30.122 "listen_addresses": [ 00:14:30.122 { 00:14:30.122 "trtype": "TCP", 00:14:30.122 "adrfam": "IPv4", 00:14:30.122 "traddr": "10.0.0.3", 00:14:30.122 "trsvcid": "4420" 00:14:30.122 } 00:14:30.122 ], 00:14:30.122 "allow_any_host": true, 00:14:30.122 "hosts": [], 00:14:30.122 "serial_number": "SPDK00000000000001", 00:14:30.122 "model_number": "SPDK bdev Controller", 00:14:30.122 "max_namespaces": 32, 00:14:30.122 "min_cntlid": 1, 00:14:30.122 "max_cntlid": 65519, 00:14:30.122 "namespaces": [ 00:14:30.122 { 00:14:30.122 "nsid": 1, 00:14:30.122 "bdev_name": "Malloc0", 00:14:30.122 "name": "Malloc0", 00:14:30.122 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:30.122 "eui64": "ABCDEF0123456789", 00:14:30.122 "uuid": "b9bd1874-6104-4032-9849-636de9a3f8fc" 00:14:30.122 } 00:14:30.122 ] 00:14:30.122 } 00:14:30.122 ] 00:14:30.122 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.122 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:30.122 [2024-11-05 09:37:16.059984] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
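The rpc_cmd calls above are thin wrappers over SPDK's rpc.py, so the configuration that produced this subsystem dump can be replayed by hand; a sketch assuming the default /var/tmp/spdk.sock socket and an SPDK checkout:

  rpc=./scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_get_subsystems   # should print the same two subsystems as above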
00:14:30.122 [2024-11-05 09:37:16.060183] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73899 ] 00:14:30.383 [2024-11-05 09:37:16.222381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:14:30.383 [2024-11-05 09:37:16.222446] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:30.383 [2024-11-05 09:37:16.222454] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:30.383 [2024-11-05 09:37:16.222466] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:30.383 [2024-11-05 09:37:16.222476] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:14:30.383 [2024-11-05 09:37:16.222789] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:14:30.383 [2024-11-05 09:37:16.222898] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2312750 0 00:14:30.383 [2024-11-05 09:37:16.234880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:30.383 [2024-11-05 09:37:16.234912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:30.383 [2024-11-05 09:37:16.234920] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:30.383 [2024-11-05 09:37:16.234925] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:30.384 [2024-11-05 09:37:16.234958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.234966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.234971] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.234987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:30.384 [2024-11-05 09:37:16.235021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.384 [2024-11-05 09:37:16.244869] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.384 [2024-11-05 09:37:16.244906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.384 [2024-11-05 09:37:16.244913] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.244918] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.384 [2024-11-05 09:37:16.244931] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:30.384 [2024-11-05 09:37:16.244940] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:14:30.384 [2024-11-05 09:37:16.244947] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:14:30.384 [2024-11-05 09:37:16.244965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.244972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
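The debug trail above is the fabrics connect handshake: a TCP socket to 10.0.0.3:4420, the ICReq/ICResp exchange, a FABRIC CONNECT that returns CNTLID 0x0001, then register property reads (VS here, CAP just below). For comparison only, the same discovery exchange could be driven with the kernel initiator via nvme-cli, assuming it is installed (nvme-tcp was modprobe'd earlier in this run):

  nvme discover -t tcp -a 10.0.0.3 -s 4420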
00:14:30.384 [2024-11-05 09:37:16.244976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.244986] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.384 [2024-11-05 09:37:16.245019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.384 [2024-11-05 09:37:16.245088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.384 [2024-11-05 09:37:16.245096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.384 [2024-11-05 09:37:16.245100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.384 [2024-11-05 09:37:16.245111] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:14:30.384 [2024-11-05 09:37:16.245119] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:14:30.384 [2024-11-05 09:37:16.245128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245137] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.245145] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.384 [2024-11-05 09:37:16.245165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.384 [2024-11-05 09:37:16.245211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.384 [2024-11-05 09:37:16.245218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.384 [2024-11-05 09:37:16.245222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.384 [2024-11-05 09:37:16.245234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:14:30.384 [2024-11-05 09:37:16.245244] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:30.384 [2024-11-05 09:37:16.245252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245256] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.245268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.384 [2024-11-05 09:37:16.245287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.384 [2024-11-05 09:37:16.245330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.384 [2024-11-05 09:37:16.245337] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.384 [2024-11-05 09:37:16.245341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245345] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.384 [2024-11-05 09:37:16.245352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:30.384 [2024-11-05 09:37:16.245363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245367] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245371] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.245379] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.384 [2024-11-05 09:37:16.245407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.384 [2024-11-05 09:37:16.245453] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.384 [2024-11-05 09:37:16.245460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.384 [2024-11-05 09:37:16.245464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.384 [2024-11-05 09:37:16.245474] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:30.384 [2024-11-05 09:37:16.245480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:14:30.384 [2024-11-05 09:37:16.245488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:30.384 [2024-11-05 09:37:16.245600] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:14:30.384 [2024-11-05 09:37:16.245606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:30.384 [2024-11-05 09:37:16.245617] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245622] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.245634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.384 [2024-11-05 09:37:16.245654] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.384 [2024-11-05 09:37:16.245703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.384 [2024-11-05 09:37:16.245710] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.384 [2024-11-05 09:37:16.245714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:14:30.384 [2024-11-05 09:37:16.245718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.384 [2024-11-05 09:37:16.245724] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:30.384 [2024-11-05 09:37:16.245735] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245740] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.384 [2024-11-05 09:37:16.245744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.384 [2024-11-05 09:37:16.245752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.384 [2024-11-05 09:37:16.245770] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.385 [2024-11-05 09:37:16.245816] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.385 [2024-11-05 09:37:16.245823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.385 [2024-11-05 09:37:16.245827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.245831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.385 [2024-11-05 09:37:16.245853] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:30.385 [2024-11-05 09:37:16.245860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:30.385 [2024-11-05 09:37:16.245870] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:14:30.385 [2024-11-05 09:37:16.245886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:30.385 [2024-11-05 09:37:16.245898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.245902] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.245911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.385 [2024-11-05 09:37:16.245933] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.385 [2024-11-05 09:37:16.246025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.385 [2024-11-05 09:37:16.246033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.385 [2024-11-05 09:37:16.246038] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246042] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2312750): datao=0, datal=4096, cccid=0 00:14:30.385 [2024-11-05 09:37:16.246047] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2376740) on tqpair(0x2312750): expected_datao=0, payload_size=4096 00:14:30.385 [2024-11-05 09:37:16.246052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246061] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246066] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.385 [2024-11-05 09:37:16.246081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.385 [2024-11-05 09:37:16.246085] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.385 [2024-11-05 09:37:16.246099] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:14:30.385 [2024-11-05 09:37:16.246105] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:14:30.385 [2024-11-05 09:37:16.246111] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:14:30.385 [2024-11-05 09:37:16.246116] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:14:30.385 [2024-11-05 09:37:16.246122] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:14:30.385 [2024-11-05 09:37:16.246127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:14:30.385 [2024-11-05 09:37:16.246141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:30.385 [2024-11-05 09:37:16.246152] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246158] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246162] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.246170] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:30.385 [2024-11-05 09:37:16.246191] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.385 [2024-11-05 09:37:16.246247] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.385 [2024-11-05 09:37:16.246254] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.385 [2024-11-05 09:37:16.246258] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246262] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.385 [2024-11-05 09:37:16.246271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.246286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.385 
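Note the two transfer limits reported during identify above: the transport advertises a 4294967295-byte maximum while MDTS caps transfers at 131072 bytes, and the smaller value governs. With the 4096-byte minimum memory page size shown in the identify dump further below, 131072 bytes is consistent with an MDTS field of 5; a quick sanity check:

  echo $(( (1 << 5) * 4096 ))   # 131072 bytes = 2^MDTS * 4096, i.e. MDTS = 5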
[2024-11-05 09:37:16.246293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246297] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246301] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.246307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.385 [2024-11-05 09:37:16.246314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246322] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.246328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.385 [2024-11-05 09:37:16.246335] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246339] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246343] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.246349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.385 [2024-11-05 09:37:16.246355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:30.385 [2024-11-05 09:37:16.246368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:30.385 [2024-11-05 09:37:16.246377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.385 [2024-11-05 09:37:16.246381] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2312750) 00:14:30.385 [2024-11-05 09:37:16.246388] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.385 [2024-11-05 09:37:16.246409] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376740, cid 0, qid 0 00:14:30.385 [2024-11-05 09:37:16.246417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x23768c0, cid 1, qid 0 00:14:30.386 [2024-11-05 09:37:16.246422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376a40, cid 2, qid 0 00:14:30.386 [2024-11-05 09:37:16.246427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.386 [2024-11-05 09:37:16.246432] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376d40, cid 4, qid 0 00:14:30.386 [2024-11-05 09:37:16.246520] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.386 [2024-11-05 09:37:16.246528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.386 [2024-11-05 09:37:16.246532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246536] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376d40) on tqpair=0x2312750 00:14:30.386 [2024-11-05 
09:37:16.246542] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:14:30.386 [2024-11-05 09:37:16.246548] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:14:30.386 [2024-11-05 09:37:16.246560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246565] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2312750) 00:14:30.386 [2024-11-05 09:37:16.246573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.386 [2024-11-05 09:37:16.246592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376d40, cid 4, qid 0 00:14:30.386 [2024-11-05 09:37:16.246653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.386 [2024-11-05 09:37:16.246661] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.386 [2024-11-05 09:37:16.246665] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246669] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2312750): datao=0, datal=4096, cccid=4 00:14:30.386 [2024-11-05 09:37:16.246674] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2376d40) on tqpair(0x2312750): expected_datao=0, payload_size=4096 00:14:30.386 [2024-11-05 09:37:16.246679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246687] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246691] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.386 [2024-11-05 09:37:16.246706] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.386 [2024-11-05 09:37:16.246710] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.386 [2024-11-05 09:37:16.246714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376d40) on tqpair=0x2312750 00:14:30.386 [2024-11-05 09:37:16.246729] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:14:30.387 [2024-11-05 09:37:16.246762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2312750) 00:14:30.387 [2024-11-05 09:37:16.246777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.387 [2024-11-05 09:37:16.246785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2312750) 00:14:30.387 [2024-11-05 09:37:16.246800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.387 [2024-11-05 09:37:16.246825] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376d40, cid 4, qid 0 00:14:30.387 [2024-11-05 09:37:16.246833] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376ec0, cid 5, qid 0 00:14:30.387 [2024-11-05 09:37:16.246953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.387 [2024-11-05 09:37:16.246961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.387 [2024-11-05 09:37:16.246965] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246969] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2312750): datao=0, datal=1024, cccid=4 00:14:30.387 [2024-11-05 09:37:16.246974] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2376d40) on tqpair(0x2312750): expected_datao=0, payload_size=1024 00:14:30.387 [2024-11-05 09:37:16.246979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246987] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246991] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.246997] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.387 [2024-11-05 09:37:16.247003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.387 [2024-11-05 09:37:16.247007] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376ec0) on tqpair=0x2312750 00:14:30.387 [2024-11-05 09:37:16.247031] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.387 [2024-11-05 09:37:16.247040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.387 [2024-11-05 09:37:16.247044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376d40) on tqpair=0x2312750 00:14:30.387 [2024-11-05 09:37:16.247061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2312750) 00:14:30.387 [2024-11-05 09:37:16.247074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.387 [2024-11-05 09:37:16.247101] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376d40, cid 4, qid 0 00:14:30.387 [2024-11-05 09:37:16.247170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.387 [2024-11-05 09:37:16.247177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.387 [2024-11-05 09:37:16.247181] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247185] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2312750): datao=0, datal=3072, cccid=4 00:14:30.387 [2024-11-05 09:37:16.247190] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2376d40) on tqpair(0x2312750): expected_datao=0, payload_size=3072 00:14:30.387 [2024-11-05 09:37:16.247195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247202] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
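The discovery log reads above encode their sizes in cdw10: bits 7:0 carry the log page id (0x70, the discovery log) and bits 31:16 carry NUMDL, the number of dwords minus one. Decoding the two cdw10 values reproduces the datal figures in the c2h_data lines:

  printf '%d\n' $(( ((0x00ff0070 >> 16) + 1) * 4 ))   # 1024-byte header read
  printf '%d\n' $(( ((0x02ff0070 >> 16) + 1) * 4 ))   # 3072-byte full-page read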
00:14:30.387 [2024-11-05 09:37:16.247206] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247215] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.387 [2024-11-05 09:37:16.247221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.387 [2024-11-05 09:37:16.247225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247229] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376d40) on tqpair=0x2312750 00:14:30.387 [2024-11-05 09:37:16.247240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.387 [2024-11-05 09:37:16.247245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2312750) 00:14:30.387 [2024-11-05 09:37:16.247253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.387 [2024-11-05 09:37:16.247276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376d40, cid 4, qid 0 00:14:30.387 ===================================================== 00:14:30.387 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:30.387 ===================================================== 00:14:30.387 Controller Capabilities/Features 00:14:30.387 ================================ 00:14:30.387 Vendor ID: 0000 00:14:30.387 Subsystem Vendor ID: 0000 00:14:30.387 Serial Number: .................... 00:14:30.387 Model Number: ........................................ 00:14:30.387 Firmware Version: 25.01 00:14:30.387 Recommended Arb Burst: 0 00:14:30.387 IEEE OUI Identifier: 00 00 00 00:14:30.387 Multi-path I/O 00:14:30.387 May have multiple subsystem ports: No 00:14:30.387 May have multiple controllers: No 00:14:30.388 Associated with SR-IOV VF: No 00:14:30.388 Max Data Transfer Size: 131072 00:14:30.388 Max Number of Namespaces: 0 00:14:30.388 Max Number of I/O Queues: 1024 00:14:30.388 NVMe Specification Version (VS): 1.3 00:14:30.388 NVMe Specification Version (Identify): 1.3 00:14:30.388 Maximum Queue Entries: 128 00:14:30.388 Contiguous Queues Required: Yes 00:14:30.388 Arbitration Mechanisms Supported 00:14:30.388 Weighted Round Robin: Not Supported 00:14:30.388 Vendor Specific: Not Supported 00:14:30.388 Reset Timeout: 15000 ms 00:14:30.388 Doorbell Stride: 4 bytes 00:14:30.388 NVM Subsystem Reset: Not Supported 00:14:30.388 Command Sets Supported 00:14:30.388 NVM Command Set: Supported 00:14:30.388 Boot Partition: Not Supported 00:14:30.388 Memory Page Size Minimum: 4096 bytes 00:14:30.388 Memory Page Size Maximum: 4096 bytes 00:14:30.388 Persistent Memory Region: Not Supported 00:14:30.388 Optional Asynchronous Events Supported 00:14:30.388 Namespace Attribute Notices: Not Supported 00:14:30.388 Firmware Activation Notices: Not Supported 00:14:30.388 ANA Change Notices: Not Supported 00:14:30.388 PLE Aggregate Log Change Notices: Not Supported 00:14:30.388 LBA Status Info Alert Notices: Not Supported 00:14:30.388 EGE Aggregate Log Change Notices: Not Supported 00:14:30.388 Normal NVM Subsystem Shutdown event: Not Supported 00:14:30.388 Zone Descriptor Change Notices: Not Supported 00:14:30.388 Discovery Log Change Notices: Supported 00:14:30.388 Controller Attributes 00:14:30.388 128-bit Host Identifier: Not Supported 00:14:30.388 Non-Operational Permissive Mode: Not Supported 00:14:30.388 NVM Sets: Not Supported 
00:14:30.388 Read Recovery Levels: Not Supported 00:14:30.388 Endurance Groups: Not Supported 00:14:30.388 Predictable Latency Mode: Not Supported 00:14:30.388 Traffic Based Keep Alive: Not Supported 00:14:30.388 Namespace Granularity: Not Supported 00:14:30.388 SQ Associations: Not Supported 00:14:30.388 UUID List: Not Supported 00:14:30.388 Multi-Domain Subsystem: Not Supported 00:14:30.388 Fixed Capacity Management: Not Supported 00:14:30.388 Variable Capacity Management: Not Supported 00:14:30.388 Delete Endurance Group: Not Supported 00:14:30.388 Delete NVM Set: Not Supported 00:14:30.388 Extended LBA Formats Supported: Not Supported 00:14:30.388 Flexible Data Placement Supported: Not Supported 00:14:30.388 00:14:30.388 Controller Memory Buffer Support 00:14:30.388 ================================ 00:14:30.388 Supported: No 00:14:30.388 00:14:30.388 Persistent Memory Region Support 00:14:30.388 ================================ 00:14:30.388 Supported: No 00:14:30.388 00:14:30.388 Admin Command Set Attributes 00:14:30.388 ============================ 00:14:30.388 Security Send/Receive: Not Supported 00:14:30.388 Format NVM: Not Supported 00:14:30.388 Firmware Activate/Download: Not Supported 00:14:30.388 Namespace Management: Not Supported 00:14:30.388 Device Self-Test: Not Supported 00:14:30.388 Directives: Not Supported 00:14:30.388 NVMe-MI: Not Supported 00:14:30.388 Virtualization Management: Not Supported 00:14:30.388 Doorbell Buffer Config: Not Supported 00:14:30.388 Get LBA Status Capability: Not Supported 00:14:30.388 Command & Feature Lockdown Capability: Not Supported 00:14:30.388 Abort Command Limit: 1 00:14:30.388 Async Event Request Limit: 4 00:14:30.388 Number of Firmware Slots: N/A 00:14:30.388 Firmware Slot 1 Read-Only: N/A 00:14:30.388 [2024-11-05 09:37:16.247343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.388 [2024-11-05 09:37:16.247350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.388 [2024-11-05 09:37:16.247354] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.388 [2024-11-05 09:37:16.247358] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2312750): datao=0, datal=8, cccid=4 00:14:30.388 [2024-11-05 09:37:16.247363] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2376d40) on tqpair(0x2312750): expected_datao=0, payload_size=8 00:14:30.388 [2024-11-05 09:37:16.247368] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.388 [2024-11-05 09:37:16.247376] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.388 [2024-11-05 09:37:16.247380] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.388 [2024-11-05 09:37:16.247395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.388 [2024-11-05 09:37:16.247402] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.388 [2024-11-05 09:37:16.247406] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.388 [2024-11-05 09:37:16.247411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376d40) on tqpair=0x2312750 00:14:30.388 Firmware Activation Without Reset: N/A 00:14:30.388 Multiple Update Detection Support: N/A 00:14:30.388 Firmware Update Granularity: No Information Provided 00:14:30.388 Per-Namespace SMART Log: No 00:14:30.388 Asymmetric Namespace Access Log Page: Not Supported 00:14:30.388 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:30.388 
Command Effects Log Page: Not Supported 00:14:30.388 Get Log Page Extended Data: Supported 00:14:30.388 Telemetry Log Pages: Not Supported 00:14:30.388 Persistent Event Log Pages: Not Supported 00:14:30.388 Supported Log Pages Log Page: May Support 00:14:30.388 Commands Supported & Effects Log Page: Not Supported 00:14:30.388 Feature Identifiers & Effects Log Page: May Support 00:14:30.388 NVMe-MI Commands & Effects Log Page: May Support 00:14:30.388 Data Area 4 for Telemetry Log: Not Supported 00:14:30.388 Error Log Page Entries Supported: 128 00:14:30.388 Keep Alive: Not Supported 00:14:30.388 00:14:30.388 NVM Command Set Attributes 00:14:30.388 ========================== 00:14:30.388 Submission Queue Entry Size 00:14:30.388 Max: 1 00:14:30.388 Min: 1 00:14:30.388 Completion Queue Entry Size 00:14:30.388 Max: 1 00:14:30.388 Min: 1 00:14:30.388 Number of Namespaces: 0 00:14:30.388 Compare Command: Not Supported 00:14:30.388 Write Uncorrectable Command: Not Supported 00:14:30.388 Dataset Management Command: Not Supported 00:14:30.388 Write Zeroes Command: Not Supported 00:14:30.388 Set Features Save Field: Not Supported 00:14:30.388 Reservations: Not Supported 00:14:30.388 Timestamp: Not Supported 00:14:30.388 Copy: Not Supported 00:14:30.388 Volatile Write Cache: Not Present 00:14:30.389 Atomic Write Unit (Normal): 1 00:14:30.389 Atomic Write Unit (PFail): 1 00:14:30.389 Atomic Compare & Write Unit: 1 00:14:30.389 Fused Compare & Write: Supported 00:14:30.389 Scatter-Gather List 00:14:30.389 SGL Command Set: Supported 00:14:30.389 SGL Keyed: Supported 00:14:30.389 SGL Bit Bucket Descriptor: Not Supported 00:14:30.389 SGL Metadata Pointer: Not Supported 00:14:30.389 Oversized SGL: Not Supported 00:14:30.389 SGL Metadata Address: Not Supported 00:14:30.389 SGL Offset: Supported 00:14:30.389 Transport SGL Data Block: Not Supported 00:14:30.389 Replay Protected Memory Block: Not Supported 00:14:30.389 00:14:30.389 Firmware Slot Information 00:14:30.389 ========================= 00:14:30.389 Active slot: 0 00:14:30.389 00:14:30.389 00:14:30.389 Error Log 00:14:30.389 ========= 00:14:30.389 00:14:30.389 Active Namespaces 00:14:30.389 ================= 00:14:30.389 Discovery Log Page 00:14:30.389 ================== 00:14:30.389 Generation Counter: 2 00:14:30.389 Number of Records: 2 00:14:30.389 Record Format: 0 00:14:30.389 00:14:30.389 Discovery Log Entry 0 00:14:30.389 ---------------------- 00:14:30.389 Transport Type: 3 (TCP) 00:14:30.389 Address Family: 1 (IPv4) 00:14:30.389 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:30.389 Entry Flags: 00:14:30.389 Duplicate Returned Information: 1 00:14:30.389 Explicit Persistent Connection Support for Discovery: 1 00:14:30.389 Transport Requirements: 00:14:30.389 Secure Channel: Not Required 00:14:30.389 Port ID: 0 (0x0000) 00:14:30.389 Controller ID: 65535 (0xffff) 00:14:30.389 Admin Max SQ Size: 128 00:14:30.389 Transport Service Identifier: 4420 00:14:30.389 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:30.389 Transport Address: 10.0.0.3 00:14:30.389 Discovery Log Entry 1 00:14:30.389 ---------------------- 00:14:30.389 Transport Type: 3 (TCP) 00:14:30.389 Address Family: 1 (IPv4) 00:14:30.389 Subsystem Type: 2 (NVM Subsystem) 00:14:30.389 Entry Flags: 00:14:30.389 Duplicate Returned Information: 0 00:14:30.389 Explicit Persistent Connection Support for Discovery: 0 00:14:30.389 Transport Requirements: 00:14:30.389 Secure Channel: Not Required 00:14:30.389 Port ID: 0 (0x0000) 00:14:30.389 Controller ID: 65535 
(0xffff) 00:14:30.389 Admin Max SQ Size: 128 00:14:30.389 Transport Service Identifier: 4420 00:14:30.389 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:30.389 Transport Address: 10.0.0.3 [2024-11-05 09:37:16.247502] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:14:30.389 [2024-11-05 09:37:16.247517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376740) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.389 [2024-11-05 09:37:16.247530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x23768c0) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.389 [2024-11-05 09:37:16.247541] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376a40) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.389 [2024-11-05 09:37:16.247551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.389 [2024-11-05 09:37:16.247566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247571] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.389 [2024-11-05 09:37:16.247583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.389 [2024-11-05 09:37:16.247606] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.389 [2024-11-05 09:37:16.247651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.389 [2024-11-05 09:37:16.247659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.389 [2024-11-05 09:37:16.247663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247675] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.389 [2024-11-05 09:37:16.247692] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.389 [2024-11-05 09:37:16.247714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.389 [2024-11-05 09:37:16.247777] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.389 [2024-11-05 09:37:16.247785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:14:30.389 [2024-11-05 09:37:16.247788] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247793] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247799] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:14:30.389 [2024-11-05 09:37:16.247804] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:14:30.389 [2024-11-05 09:37:16.247815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247824] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.389 [2024-11-05 09:37:16.247832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.389 [2024-11-05 09:37:16.247868] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.389 [2024-11-05 09:37:16.247919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.389 [2024-11-05 09:37:16.247926] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.389 [2024-11-05 09:37:16.247930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.389 [2024-11-05 09:37:16.247947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.389 [2024-11-05 09:37:16.247956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.389 [2024-11-05 09:37:16.247964] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.389 [2024-11-05 09:37:16.247982] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.389 [2024-11-05 09:37:16.248028] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248035] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248039] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248043] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248063] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248134] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248149] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248160] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248164] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248168] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248176] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248193] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248246] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248265] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248282] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248299] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248348] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248354] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248363] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248390] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248461] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 
09:37:16.248465] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248469] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248514] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248557] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248666] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248672] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248676] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750 00:14:30.390 [2024-11-05 09:37:16.248691] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248696] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248700] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750) 00:14:30.390 [2024-11-05 09:37:16.248708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.390 [2024-11-05 09:37:16.248725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0 00:14:30.390 [2024-11-05 09:37:16.248771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.390 [2024-11-05 09:37:16.248778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.390 [2024-11-05 09:37:16.248782] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.390 [2024-11-05 09:37:16.248786] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on 
00:14:30.390 [2024-11-05 09:37:16.248797] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:30.390 [2024-11-05 09:37:16.248805] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:30.390 [2024-11-05 09:37:16.248812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2312750)
00:14:30.390 [2024-11-05 09:37:16.248823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:30.390 [2024-11-05 09:37:16.252872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2376bc0, cid 3, qid 0
00:14:30.390 [2024-11-05 09:37:16.252934] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:30.390 [2024-11-05 09:37:16.252944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:30.390 [2024-11-05 09:37:16.252949] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:30.390 [2024-11-05 09:37:16.252953] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2376bc0) on tqpair=0x2312750
00:14:30.390 [2024-11-05 09:37:16.252963] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds
00:14:30.390
00:14:30.390 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:14:30.390 [2024-11-05 09:37:16.303930] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:14:30.390 [2024-11-05 09:37:16.304025] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73907 ]
00:14:30.652 [2024-11-05 09:37:16.474202] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:14:30.652 [2024-11-05 09:37:16.474275] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:14:30.652 [2024-11-05 09:37:16.474283] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:14:30.652 [2024-11-05 09:37:16.474297] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:14:30.652 [2024-11-05 09:37:16.474308] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:14:30.652 [2024-11-05 09:37:16.474635] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:14:30.652 [2024-11-05 09:37:16.474711] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1a1f750 0
00:14:30.653 [2024-11-05 09:37:16.488858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:14:30.653 [2024-11-05 09:37:16.488901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:14:30.653 [2024-11-05 09:37:16.488910] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:14:30.653 [2024-11-05 09:37:16.488915] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:14:30.653 [2024-11-05 09:37:16.488946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.488953] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.488958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750)
00:14:30.653 [2024-11-05 09:37:16.488973] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:14:30.653 [2024-11-05 09:37:16.489008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0
00:14:30.653 [2024-11-05 09:37:16.494858] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:30.653 [2024-11-05 09:37:16.494884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:30.653 [2024-11-05 09:37:16.494890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.494895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750
00:14:30.653 [2024-11-05 09:37:16.494908] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:14:30.653 [2024-11-05 09:37:16.494917] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:14:30.653 [2024-11-05 09:37:16.494924] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:14:30.653 [2024-11-05 09:37:16.494942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.494948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.494953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750)
00:14:30.653 [2024-11-05 09:37:16.494963] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:30.653 [2024-11-05 09:37:16.494994] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0
00:14:30.653 [2024-11-05 09:37:16.495060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:30.653 [2024-11-05 09:37:16.495068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:30.653 [2024-11-05 09:37:16.495072] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.495077] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750
00:14:30.653 [2024-11-05 09:37:16.495083] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:14:30.653 [2024-11-05 09:37:16.495092] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:14:30.653 [2024-11-05 09:37:16.495101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.495105] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:30.653 [2024-11-05 09:37:16.495110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750)
00:14:30.653 [2024-11-05 09:37:16.495118] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:30.653 [2024-11-05 09:37:16.495138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0
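The trace above is the start of the identify step proper: host/identify.sh runs spdk_nvme_identify with every SPDK log flag enabled ('-L all'), and the driver walks the NVMe-oF initialization state machine over the admin queue (connect adminq, ICReq/ICResp exchange, FABRIC CONNECT, then FABRIC PROPERTY GET reads of the VS and CAP registers). To replay just this step by hand, something like the following should work (a sketch: it assumes the target brought up earlier in this job is still serving 10.0.0.3:4420 and reuses the repo path shown in the command above):

# Re-run the identify step manually; drop '-L all' to get only the
# controller report without the *DEBUG* PDU chatter that fills this log.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -L all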
[2024-11-05 09:37:16.495546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.653 [2024-11-05 09:37:16.495563] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.653 [2024-11-05 09:37:16.495568] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.495572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.653 [2024-11-05 09:37:16.495579] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:14:30.653 [2024-11-05 09:37:16.495589] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:14:30.653 [2024-11-05 09:37:16.495598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.495603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.495607] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.653 [2024-11-05 09:37:16.495615] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.653 [2024-11-05 09:37:16.495636] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.653 [2024-11-05 09:37:16.495689] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.653 [2024-11-05 09:37:16.495697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.653 [2024-11-05 09:37:16.495701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.495705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.653 [2024-11-05 09:37:16.495711] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:30.653 [2024-11-05 09:37:16.495723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.495728] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.495732] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.653 [2024-11-05 09:37:16.495740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.653 [2024-11-05 09:37:16.495758] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.653 [2024-11-05 09:37:16.495988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.653 [2024-11-05 09:37:16.496001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.653 [2024-11-05 09:37:16.496005] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496010] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.653 [2024-11-05 09:37:16.496016] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:14:30.653 [2024-11-05 09:37:16.496023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled 
(timeout 15000 ms) 00:14:30.653 [2024-11-05 09:37:16.496032] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:30.653 [2024-11-05 09:37:16.496144] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:14:30.653 [2024-11-05 09:37:16.496151] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:30.653 [2024-11-05 09:37:16.496162] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496167] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.653 [2024-11-05 09:37:16.496179] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.653 [2024-11-05 09:37:16.496202] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.653 [2024-11-05 09:37:16.496525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.653 [2024-11-05 09:37:16.496541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.653 [2024-11-05 09:37:16.496546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496551] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.653 [2024-11-05 09:37:16.496557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:30.653 [2024-11-05 09:37:16.496569] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496579] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.653 [2024-11-05 09:37:16.496587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.653 [2024-11-05 09:37:16.496608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.653 [2024-11-05 09:37:16.496668] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.653 [2024-11-05 09:37:16.496675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.653 [2024-11-05 09:37:16.496679] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.653 [2024-11-05 09:37:16.496689] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:30.653 [2024-11-05 09:37:16.496694] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:14:30.653 [2024-11-05 09:37:16.496704] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:14:30.653 [2024-11-05 
09:37:16.496720] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:14:30.653 [2024-11-05 09:37:16.496732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.496737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.653 [2024-11-05 09:37:16.496746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.653 [2024-11-05 09:37:16.496766] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.653 [2024-11-05 09:37:16.497365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.653 [2024-11-05 09:37:16.497384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.653 [2024-11-05 09:37:16.497390] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.497394] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=4096, cccid=0 00:14:30.653 [2024-11-05 09:37:16.497400] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a83740) on tqpair(0x1a1f750): expected_datao=0, payload_size=4096 00:14:30.653 [2024-11-05 09:37:16.497405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.497414] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.497420] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.653 [2024-11-05 09:37:16.497430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.653 [2024-11-05 09:37:16.497437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.654 [2024-11-05 09:37:16.497440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.654 [2024-11-05 09:37:16.497455] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:14:30.654 [2024-11-05 09:37:16.497461] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:14:30.654 [2024-11-05 09:37:16.497466] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:14:30.654 [2024-11-05 09:37:16.497471] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:14:30.654 [2024-11-05 09:37:16.497476] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:14:30.654 [2024-11-05 09:37:16.497482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.497498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.497510] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497515] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497520] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.497529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:30.654 [2024-11-05 09:37:16.497554] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.654 [2024-11-05 09:37:16.497680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.654 [2024-11-05 09:37:16.497688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.654 [2024-11-05 09:37:16.497692] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on tqpair=0x1a1f750 00:14:30.654 [2024-11-05 09:37:16.497705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497710] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.497722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.654 [2024-11-05 09:37:16.497729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.497743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.654 [2024-11-05 09:37:16.497750] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497754] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497758] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.497765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.654 [2024-11-05 09:37:16.497771] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.497786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.654 [2024-11-05 09:37:16.497792] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.497811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.497825] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.497833] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.497866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-11-05 09:37:16.497894] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83740, cid 0, qid 0 00:14:30.654 [2024-11-05 09:37:16.497903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a838c0, cid 1, qid 0 00:14:30.654 [2024-11-05 09:37:16.497908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83a40, cid 2, qid 0 00:14:30.654 [2024-11-05 09:37:16.497913] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83bc0, cid 3, qid 0 00:14:30.654 [2024-11-05 09:37:16.497919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.654 [2024-11-05 09:37:16.498415] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.654 [2024-11-05 09:37:16.498433] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.654 [2024-11-05 09:37:16.498438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.498442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.654 [2024-11-05 09:37:16.498449] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:14:30.654 [2024-11-05 09:37:16.498455] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.498465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.498478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.498486] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.498491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.498495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.498504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:30.654 [2024-11-05 09:37:16.498525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.654 [2024-11-05 09:37:16.498675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.654 [2024-11-05 09:37:16.498683] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.654 [2024-11-05 09:37:16.498687] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.498691] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.654 [2024-11-05 09:37:16.498761] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.498783] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait 
for identify active ns (timeout 30000 ms) 00:14:30.654 [2024-11-05 09:37:16.498794] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.498798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.654 [2024-11-05 09:37:16.498807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.654 [2024-11-05 09:37:16.498829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.654 [2024-11-05 09:37:16.499347] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.654 [2024-11-05 09:37:16.499357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.654 [2024-11-05 09:37:16.499361] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.499365] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=4096, cccid=4 00:14:30.654 [2024-11-05 09:37:16.499370] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a83d40) on tqpair(0x1a1f750): expected_datao=0, payload_size=4096 00:14:30.654 [2024-11-05 09:37:16.499375] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.499384] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.499389] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.499398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.654 [2024-11-05 09:37:16.499404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.654 [2024-11-05 09:37:16.499408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.654 [2024-11-05 09:37:16.499413] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.654 [2024-11-05 09:37:16.499430] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:14:30.654 [2024-11-05 09:37:16.499444] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499457] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.499479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.655 [2024-11-05 09:37:16.499502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.655 [2024-11-05 09:37:16.499605] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.655 [2024-11-05 09:37:16.499613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.655 [2024-11-05 09:37:16.499617] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499621] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
c2h_data info on tqpair(0x1a1f750): datao=0, datal=4096, cccid=4 00:14:30.655 [2024-11-05 09:37:16.499626] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a83d40) on tqpair(0x1a1f750): expected_datao=0, payload_size=4096 00:14:30.655 [2024-11-05 09:37:16.499631] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499639] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499644] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.499659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.499663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.499685] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499697] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.499721] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.655 [2024-11-05 09:37:16.499742] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.655 [2024-11-05 09:37:16.499804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.655 [2024-11-05 09:37:16.499812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.655 [2024-11-05 09:37:16.499816] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499820] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=4096, cccid=4 00:14:30.655 [2024-11-05 09:37:16.499825] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a83d40) on tqpair(0x1a1f750): expected_datao=0, payload_size=4096 00:14:30.655 [2024-11-05 09:37:16.499830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499851] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499857] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.499874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.499877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.499892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to identify ns iocs specific (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499921] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499927] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499939] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:14:30.655 [2024-11-05 09:37:16.499944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:14:30.655 [2024-11-05 09:37:16.499950] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:14:30.655 [2024-11-05 09:37:16.499968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.499981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.655 [2024-11-05 09:37:16.499989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499993] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.499997] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.500004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:30.655 [2024-11-05 09:37:16.500032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.655 [2024-11-05 09:37:16.500041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83ec0, cid 5, qid 0 00:14:30.655 [2024-11-05 09:37:16.500595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.503853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.503874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.503880] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.503888] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.503895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.503899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.503903] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83ec0) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.503919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.503925] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.503934] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.655 [2024-11-05 09:37:16.503965] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83ec0, cid 5, qid 0 00:14:30.655 [2024-11-05 09:37:16.504025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.504032] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.504036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.504041] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83ec0) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.504052] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.504057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.504065] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.655 [2024-11-05 09:37:16.504084] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83ec0, cid 5, qid 0 00:14:30.655 [2024-11-05 09:37:16.504251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.504258] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.504262] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.504266] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83ec0) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.504277] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.504282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.504290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.655 [2024-11-05 09:37:16.504308] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83ec0, cid 5, qid 0 00:14:30.655 [2024-11-05 09:37:16.504559] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.655 [2024-11-05 09:37:16.504576] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.655 [2024-11-05 09:37:16.504581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.504585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83ec0) on tqpair=0x1a1f750 00:14:30.655 [2024-11-05 09:37:16.504609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.655 [2024-11-05 09:37:16.504615] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1a1f750) 00:14:30.655 [2024-11-05 09:37:16.504623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE 
(02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.656 [2024-11-05 09:37:16.504632] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.504636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1a1f750) 00:14:30.656 [2024-11-05 09:37:16.504643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.656 [2024-11-05 09:37:16.504651] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.504656] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1a1f750) 00:14:30.656 [2024-11-05 09:37:16.504663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.656 [2024-11-05 09:37:16.504671] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.504675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a1f750) 00:14:30.656 [2024-11-05 09:37:16.504682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.656 [2024-11-05 09:37:16.504706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83ec0, cid 5, qid 0 00:14:30.656 [2024-11-05 09:37:16.504714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83d40, cid 4, qid 0 00:14:30.656 [2024-11-05 09:37:16.504720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a84040, cid 6, qid 0 00:14:30.656 [2024-11-05 09:37:16.504725] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a841c0, cid 7, qid 0 00:14:30.656 [2024-11-05 09:37:16.505211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.656 [2024-11-05 09:37:16.505230] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.656 [2024-11-05 09:37:16.505235] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505239] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=8192, cccid=5 00:14:30.656 [2024-11-05 09:37:16.505245] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a83ec0) on tqpair(0x1a1f750): expected_datao=0, payload_size=8192 00:14:30.656 [2024-11-05 09:37:16.505250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505269] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505275] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.656 [2024-11-05 09:37:16.505288] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.656 [2024-11-05 09:37:16.505291] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505296] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=512, cccid=4 00:14:30.656 [2024-11-05 09:37:16.505301] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1a83d40) on tqpair(0x1a1f750): expected_datao=0, payload_size=512 00:14:30.656 [2024-11-05 09:37:16.505305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505312] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505316] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.656 [2024-11-05 09:37:16.505329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.656 [2024-11-05 09:37:16.505332] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505336] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=512, cccid=6 00:14:30.656 [2024-11-05 09:37:16.505341] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a84040) on tqpair(0x1a1f750): expected_datao=0, payload_size=512 00:14:30.656 [2024-11-05 09:37:16.505346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505352] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505356] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505362] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:30.656 [2024-11-05 09:37:16.505368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:30.656 [2024-11-05 09:37:16.505372] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505376] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1a1f750): datao=0, datal=4096, cccid=7 00:14:30.656 [2024-11-05 09:37:16.505381] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1a841c0) on tqpair(0x1a1f750): expected_datao=0, payload_size=4096 00:14:30.656 [2024-11-05 09:37:16.505386] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505393] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505397] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.656 [2024-11-05 09:37:16.505409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.656 [2024-11-05 09:37:16.505413] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505417] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83ec0) on tqpair=0x1a1f750 00:14:30.656 [2024-11-05 09:37:16.505435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.656 [2024-11-05 09:37:16.505443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.656 [2024-11-05 09:37:16.505447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.656 [2024-11-05 09:37:16.505451] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83d40) on tqpair=0x1a1f750 00:14:30.656 ===================================================== 00:14:30.656 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.656 ===================================================== 00:14:30.656 Controller Capabilities/Features 00:14:30.656 ================================ 
00:14:30.656 Vendor ID: 8086
00:14:30.656 Subsystem Vendor ID: 8086
00:14:30.656 Serial Number: SPDK00000000000001
00:14:30.656 Model Number: SPDK bdev Controller
00:14:30.656 Firmware Version: 25.01
00:14:30.656 Recommended Arb Burst: 6
00:14:30.656 IEEE OUI Identifier: e4 d2 5c
00:14:30.656 Multi-path I/O
00:14:30.656 May have multiple subsystem ports: Yes
00:14:30.656 May have multiple controllers: Yes
00:14:30.656 Associated with SR-IOV VF: No
00:14:30.656 Max Data Transfer Size: 131072
00:14:30.656 Max Number of Namespaces: 32
00:14:30.656 Max Number of I/O Queues: 127
00:14:30.656 NVMe Specification Version (VS): 1.3
00:14:30.656 NVMe Specification Version (Identify): 1.3
00:14:30.656 Maximum Queue Entries: 128
00:14:30.656 Contiguous Queues Required: Yes
00:14:30.656 Arbitration Mechanisms Supported
00:14:30.656 Weighted Round Robin: Not Supported
00:14:30.656 Vendor Specific: Not Supported
00:14:30.656 Reset Timeout: 15000 ms
00:14:30.656 Doorbell Stride: 4 bytes
00:14:30.656 NVM Subsystem Reset: Not Supported
00:14:30.656 Command Sets Supported
00:14:30.656 NVM Command Set: Supported
00:14:30.656 Boot Partition: Not Supported
00:14:30.656 Memory Page Size Minimum: 4096 bytes
00:14:30.656 Memory Page Size Maximum: 4096 bytes
00:14:30.656 Persistent Memory Region: Not Supported
00:14:30.656 Optional Asynchronous Events Supported
00:14:30.656 Namespace Attribute Notices: Supported
00:14:30.656 Firmware Activation Notices: Not Supported
00:14:30.656 ANA Change Notices: Not Supported
00:14:30.656 PLE Aggregate Log Change Notices: Not Supported
00:14:30.656 LBA Status Info Alert Notices: Not Supported
00:14:30.656 EGE Aggregate Log Change Notices: Not Supported
00:14:30.656 Normal NVM Subsystem Shutdown event: Not Supported
00:14:30.656 Zone Descriptor Change Notices: Not Supported
00:14:30.656 Discovery Log Change Notices: Not Supported
00:14:30.656 Controller Attributes
00:14:30.656 128-bit Host Identifier: Supported
00:14:30.656 Non-Operational Permissive Mode: Not Supported
00:14:30.656 NVM Sets: Not Supported
00:14:30.656 Read Recovery Levels: Not Supported
00:14:30.656 Endurance Groups: Not Supported
00:14:30.656 Predictable Latency Mode: Not Supported
00:14:30.656 Traffic Based Keep ALive: Not Supported
00:14:30.657 Namespace Granularity: Not Supported
00:14:30.657 SQ Associations: Not Supported
00:14:30.657 UUID List: Not Supported
00:14:30.657 Multi-Domain Subsystem: Not Supported
00:14:30.657 Fixed Capacity Management: Not Supported
00:14:30.657 Variable Capacity Management: Not Supported
00:14:30.657 Delete Endurance Group: Not Supported
00:14:30.657 Delete NVM Set: Not Supported
00:14:30.657 Extended LBA Formats Supported: Not Supported
00:14:30.657 Flexible Data Placement Supported: Not Supported
00:14:30.657
00:14:30.657 Controller Memory Buffer Support
00:14:30.657 ================================
00:14:30.657 Supported: No
00:14:30.657
00:14:30.657 Persistent Memory Region Support
00:14:30.657 ================================
00:14:30.657 Supported: No
00:14:30.657
00:14:30.657 Admin Command Set Attributes
00:14:30.657 ============================
00:14:30.657 Security Send/Receive: Not Supported
00:14:30.657 Format NVM: Not Supported
00:14:30.657 Firmware Activate/Download: Not Supported
00:14:30.657 Namespace Management: Not Supported
00:14:30.657 Device Self-Test: Not Supported
00:14:30.657 Directives: Not Supported
00:14:30.657 NVMe-MI: Not Supported
00:14:30.657 Virtualization Management: Not Supported
00:14:30.657 Doorbell Buffer Config: Not Supported
00:14:30.657 Get LBA Status Capability: Not Supported
00:14:30.657 Command & Feature Lockdown Capability: Not Supported
00:14:30.657 Abort Command Limit: 4
00:14:30.657 Async Event Request Limit: 4
00:14:30.657 Number of Firmware Slots: N/A
00:14:30.657 Firmware Slot 1 Read-Only: N/A
00:14:30.657 Firmware Activation Without Reset: N/A
00:14:30.657 Multiple Update Detection Support: N/A
00:14:30.657 Firmware Update Granularity: No Information Provided
00:14:30.657 Per-Namespace SMART Log: No
00:14:30.657 Asymmetric Namespace Access Log Page: Not Supported
00:14:30.657 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:14:30.657 Command Effects Log Page: Supported
00:14:30.657 Get Log Page Extended Data: Supported
00:14:30.657 Telemetry Log Pages: Not Supported
00:14:30.657 Persistent Event Log Pages: Not Supported
00:14:30.657 Supported Log Pages Log Page: May Support
00:14:30.657 Commands Supported & Effects Log Page: Not Supported
00:14:30.657 Feature Identifiers & Effects Log Page:May Support
00:14:30.657 NVMe-MI Commands & Effects Log Page: May Support
00:14:30.657 Data Area 4 for Telemetry Log: Not Supported
00:14:30.657 Error Log Page Entries Supported: 128
00:14:30.657 Keep Alive: Supported
00:14:30.657 Keep Alive Granularity: 10000 ms
00:14:30.657
00:14:30.657 NVM Command Set Attributes
00:14:30.657 ==========================
00:14:30.657 Submission Queue Entry Size
00:14:30.657 Max: 64
00:14:30.657 Min: 64
00:14:30.657 Completion Queue Entry Size
00:14:30.657 Max: 16
00:14:30.657 Min: 16
00:14:30.657 Number of Namespaces: 32
00:14:30.657 Compare Command: Supported
00:14:30.657 Write Uncorrectable Command: Not Supported
00:14:30.657 Dataset Management Command: Supported
00:14:30.657 Write Zeroes Command: Supported
00:14:30.657 Set Features Save Field: Not Supported
00:14:30.657 Reservations: Supported
00:14:30.657 Timestamp: Not Supported
00:14:30.657 Copy: Supported
00:14:30.657 Volatile Write Cache: Present
00:14:30.657 Atomic Write Unit (Normal): 1
00:14:30.657 Atomic Write Unit (PFail): 1
00:14:30.657 Atomic Compare & Write Unit: 1
00:14:30.657 Fused Compare & Write: Supported
00:14:30.657 Scatter-Gather List
00:14:30.657 SGL Command Set: Supported
00:14:30.657 SGL Keyed: Supported
00:14:30.657 SGL Bit Bucket Descriptor: Not Supported
00:14:30.657 SGL Metadata Pointer: Not Supported
00:14:30.657 Oversized SGL: Not Supported
00:14:30.657 SGL Metadata Address: Not Supported
00:14:30.657 SGL Offset: Supported
00:14:30.657 Transport SGL Data Block: Not Supported
00:14:30.657 Replay Protected Memory Block: Not Supported
00:14:30.657
00:14:30.657 Firmware Slot Information
00:14:30.657 =========================
00:14:30.657 Active slot: 1
00:14:30.657 Slot 1 Firmware Revision: 25.01
00:14:30.657
00:14:30.657
00:14:30.657 Commands Supported and Effects
00:14:30.657 ==============================
00:14:30.657 Admin Commands
00:14:30.657 --------------
00:14:30.657 Get Log Page (02h): Supported
00:14:30.657 Identify (06h): Supported
00:14:30.657 Abort (08h): Supported
00:14:30.657 Set Features (09h): Supported
00:14:30.657 Get Features (0Ah): Supported
00:14:30.657 Asynchronous Event Request (0Ch): Supported
00:14:30.657 Keep Alive (18h): Supported
00:14:30.657 I/O Commands
00:14:30.657 ------------
00:14:30.657 Flush (00h): Supported LBA-Change
00:14:30.657 Write (01h): Supported LBA-Change
00:14:30.657 Read (02h): Supported
00:14:30.657 Compare (05h): Supported
00:14:30.657 Write Zeroes (08h): Supported LBA-Change
00:14:30.657 Dataset Management (09h): Supported LBA-Change
00:14:30.657 Copy (19h): Supported LBA-Change
00:14:30.657
00:14:30.657 Error Log
00:14:30.657 =========
00:14:30.657
00:14:30.657 Arbitration
00:14:30.657 ===========
00:14:30.657 Arbitration Burst: 1
00:14:30.657
00:14:30.657 Power Management
00:14:30.657 ================
00:14:30.657 Number of Power States: 1
00:14:30.657 Current Power State: Power State #0
00:14:30.657 Power State #0:
00:14:30.657 Max Power: 0.00 W
00:14:30.657 Non-Operational State: Operational
00:14:30.657 Entry Latency: Not Reported
00:14:30.657 Exit Latency: Not Reported
00:14:30.657 Relative Read Throughput: 0
00:14:30.657 Relative Read Latency: 0
00:14:30.657 Relative Write Throughput: 0
00:14:30.657 Relative Write Latency: 0
00:14:30.657 Idle Power: Not Reported
00:14:30.657 Active Power: Not Reported
00:14:30.657 Non-Operational Permissive Mode: Not Supported
00:14:30.657
00:14:30.657 Health Information
00:14:30.657 ==================
00:14:30.657 Critical Warnings:
00:14:30.657 Available Spare Space: OK
00:14:30.657 Temperature: OK
00:14:30.657 Device Reliability: OK
00:14:30.657 Read Only: No
00:14:30.657 Volatile Memory Backup: OK
00:14:30.657 Current Temperature: 0 Kelvin (-273 Celsius)
00:14:30.657 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:14:30.657 Available Spare: 0%
00:14:30.657 Available Spare Threshold: 0%
00:14:30.657 Life Percentage Used:[2024-11-05 09:37:16.505464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:30.657 [2024-11-05 09:37:16.505470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:30.657 [2024-11-05 09:37:16.505474] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:30.657 [2024-11-05 09:37:16.505479] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a84040) on tqpair=0x1a1f750
00:14:30.657 [2024-11-05 09:37:16.505487] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:30.657 [2024-11-05 09:37:16.505493] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:30.657 [2024-11-05 09:37:16.505497] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:30.657 [2024-11-05 09:37:16.505501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a841c0) on tqpair=0x1a1f750
00:14:30.657 [2024-11-05 09:37:16.505609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:14:30.657 [2024-11-05 09:37:16.505616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1a1f750)
00:14:30.657 [2024-11-05 09:37:16.505625] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:14:30.657 [2024-11-05 09:37:16.505652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a841c0, cid 7, qid 0
00:14:30.657 [2024-11-05 09:37:16.506276] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:14:30.657 [2024-11-05 09:37:16.506299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:14:30.657 [2024-11-05 09:37:16.506310] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:14:30.657 [2024-11-05 09:37:16.506314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a841c0) on tqpair=0x1a1f750
00:14:30.657 [2024-11-05 09:37:16.506358] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD
00:14:30.657 [2024-11-05 09:37:16.506371] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83740) on
tqpair=0x1a1f750 00:14:30.657 [2024-11-05 09:37:16.506378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.658 [2024-11-05 09:37:16.506385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a838c0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.506390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.658 [2024-11-05 09:37:16.506396] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83a40) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.506401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.658 [2024-11-05 09:37:16.506406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83bc0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.506412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:30.658 [2024-11-05 09:37:16.506422] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.506427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.506431] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1f750) 00:14:30.658 [2024-11-05 09:37:16.506440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.658 [2024-11-05 09:37:16.506468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83bc0, cid 3, qid 0 00:14:30.658 [2024-11-05 09:37:16.506883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.658 [2024-11-05 09:37:16.506900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.658 [2024-11-05 09:37:16.506905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.506910] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83bc0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.506919] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.506924] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.506928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1f750) 00:14:30.658 [2024-11-05 09:37:16.506936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.658 [2024-11-05 09:37:16.506962] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83bc0, cid 3, qid 0 00:14:30.658 [2024-11-05 09:37:16.507254] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.658 [2024-11-05 09:37:16.507269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.658 [2024-11-05 09:37:16.507274] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.507278] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83bc0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.507284] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:14:30.658 [2024-11-05 09:37:16.507290] 
nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:14:30.658 [2024-11-05 09:37:16.507302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.507307] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.507311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1f750) 00:14:30.658 [2024-11-05 09:37:16.507319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.658 [2024-11-05 09:37:16.507340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83bc0, cid 3, qid 0 00:14:30.658 [2024-11-05 09:37:16.507394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.658 [2024-11-05 09:37:16.507401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.658 [2024-11-05 09:37:16.507405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.507415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83bc0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.507427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.507432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.507436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1f750) 00:14:30.658 [2024-11-05 09:37:16.507444] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.658 [2024-11-05 09:37:16.507463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83bc0, cid 3, qid 0 00:14:30.658 [2024-11-05 09:37:16.511860] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.658 [2024-11-05 09:37:16.511884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.658 [2024-11-05 09:37:16.511890] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.511895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83bc0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.511910] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.511915] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.511920] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1a1f750) 00:14:30.658 [2024-11-05 09:37:16.511929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:30.658 [2024-11-05 09:37:16.511957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1a83bc0, cid 3, qid 0 00:14:30.658 [2024-11-05 09:37:16.512014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:30.658 [2024-11-05 09:37:16.512021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:30.658 [2024-11-05 09:37:16.512025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:30.658 [2024-11-05 09:37:16.512030] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1a83bc0) on tqpair=0x1a1f750 00:14:30.658 [2024-11-05 09:37:16.512039] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:14:30.658 0% 00:14:30.658 Data Units Read: 0 00:14:30.658 Data Units Written: 0 00:14:30.658 Host Read Commands: 0 00:14:30.658 Host Write Commands: 0 00:14:30.658 Controller Busy Time: 0 minutes 00:14:30.658 Power Cycles: 0 00:14:30.658 Power On Hours: 0 hours 00:14:30.658 Unsafe Shutdowns: 0 00:14:30.658 Unrecoverable Media Errors: 0 00:14:30.658 Lifetime Error Log Entries: 0 00:14:30.658 Warning Temperature Time: 0 minutes 00:14:30.658 Critical Temperature Time: 0 minutes 00:14:30.658 00:14:30.658 Number of Queues 00:14:30.658 ================ 00:14:30.658 Number of I/O Submission Queues: 127 00:14:30.658 Number of I/O Completion Queues: 127 00:14:30.658 00:14:30.658 Active Namespaces 00:14:30.658 ================= 00:14:30.658 Namespace ID:1 00:14:30.658 Error Recovery Timeout: Unlimited 00:14:30.658 Command Set Identifier: NVM (00h) 00:14:30.658 Deallocate: Supported 00:14:30.659 Deallocated/Unwritten Error: Not Supported 00:14:30.659 Deallocated Read Value: Unknown 00:14:30.659 Deallocate in Write Zeroes: Not Supported 00:14:30.659 Deallocated Guard Field: 0xFFFF 00:14:30.659 Flush: Supported 00:14:30.659 Reservation: Supported 00:14:30.659 Namespace Sharing Capabilities: Multiple Controllers 00:14:30.659 Size (in LBAs): 131072 (0GiB) 00:14:30.659 Capacity (in LBAs): 131072 (0GiB) 00:14:30.659 Utilization (in LBAs): 131072 (0GiB) 00:14:30.659 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:30.659 EUI64: ABCDEF0123456789 00:14:30.659 UUID: b9bd1874-6104-4032-9849-636de9a3f8fc 00:14:30.659 Thin Provisioning: Not Supported 00:14:30.659 Per-NS Atomic Units: Yes 00:14:30.659 Atomic Boundary Size (Normal): 0 00:14:30.659 Atomic Boundary Size (PFail): 0 00:14:30.659 Atomic Boundary Offset: 0 00:14:30.659 Maximum Single Source Range Length: 65535 00:14:30.659 Maximum Copy Length: 65535 00:14:30.659 Maximum Source Range Count: 1 00:14:30.659 NGUID/EUI64 Never Reused: No 00:14:30.659 Namespace Write Protected: No 00:14:30.659 Number of LBA Formats: 1 00:14:30.659 Current LBA Format: LBA Format #00 00:14:30.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:30.659 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:30.659 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:14:30.918 rmmod nvme_tcp 00:14:30.918 rmmod nvme_fabrics 00:14:30.918 rmmod nvme_keyring 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 73872 ']' 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 73872 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 73872 ']' 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 73872 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73872 00:14:30.918 killing process with pid 73872 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73872' 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 73872 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 73872 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:30.918 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:31.176 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # 
ip link set nvmf_tgt_br2 down 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:31.177 09:37:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:14:31.177 00:14:31.177 real 0m2.190s 00:14:31.177 user 0m4.441s 00:14:31.177 sys 0m0.718s 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:14:31.177 ************************************ 00:14:31.177 END TEST nvmf_identify 00:14:31.177 ************************************ 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:31.177 09:37:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:31.436 ************************************ 00:14:31.436 START TEST nvmf_perf 00:14:31.436 ************************************ 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:31.436 * Looking for test storage... 
00:14:31.436 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.436 --rc genhtml_branch_coverage=1 00:14:31.436 --rc genhtml_function_coverage=1 00:14:31.436 --rc genhtml_legend=1 00:14:31.436 --rc geninfo_all_blocks=1 00:14:31.436 --rc geninfo_unexecuted_blocks=1 00:14:31.436 00:14:31.436 ' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.436 --rc genhtml_branch_coverage=1 00:14:31.436 --rc genhtml_function_coverage=1 00:14:31.436 --rc genhtml_legend=1 00:14:31.436 --rc geninfo_all_blocks=1 00:14:31.436 --rc geninfo_unexecuted_blocks=1 00:14:31.436 00:14:31.436 ' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.436 --rc genhtml_branch_coverage=1 00:14:31.436 --rc genhtml_function_coverage=1 00:14:31.436 --rc genhtml_legend=1 00:14:31.436 --rc geninfo_all_blocks=1 00:14:31.436 --rc geninfo_unexecuted_blocks=1 00:14:31.436 00:14:31.436 ' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:31.436 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:31.436 --rc genhtml_branch_coverage=1 00:14:31.436 --rc genhtml_function_coverage=1 00:14:31.436 --rc genhtml_legend=1 00:14:31.436 --rc geninfo_all_blocks=1 00:14:31.436 --rc geninfo_unexecuted_blocks=1 00:14:31.436 00:14:31.436 ' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.436 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:31.437 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:31.437 Cannot find device "nvmf_init_br" 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:31.437 Cannot find device "nvmf_init_br2" 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:14:31.437 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:31.696 Cannot find device "nvmf_tgt_br" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:31.696 Cannot find device "nvmf_tgt_br2" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:31.696 Cannot find device "nvmf_init_br" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:31.696 Cannot find device "nvmf_init_br2" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:31.696 Cannot find device "nvmf_tgt_br" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:31.696 Cannot find device "nvmf_tgt_br2" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:31.696 Cannot find device "nvmf_br" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:31.696 Cannot find device "nvmf_init_if" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:31.696 Cannot find device "nvmf_init_if2" 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:31.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:31.696 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:31.696 09:37:17 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:31.696 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:31.955 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:31.955 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.076 ms 00:14:31.955 00:14:31.955 --- 10.0.0.3 ping statistics --- 00:14:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.955 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:31.955 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:14:31.955 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:14:31.955 00:14:31.955 --- 10.0.0.4 ping statistics --- 00:14:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.955 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:31.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:31.955 00:14:31.955 --- 10.0.0.1 ping statistics --- 00:14:31.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.955 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:31.955 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:31.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:14:31.955 00:14:31.955 --- 10.0.0.2 ping statistics --- 00:14:31.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.956 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=74121 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 74121 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 74121 ']' 00:14:31.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
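The four ping checks above exercise the veth fixture that nvmf_veth_init builds before the target starts. Condensed from the trace, the topology amounts to the sketch below; every command is taken verbatim from the logged nvmf/common.sh steps, but only the first initiator/target pair is shown, and collapsing the *_if2 variants is a simplification of this summary, not something the harness does:

  # One initiator veth pair on the host, one target pair whose far end lives in a netns
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Addressing: initiator 10.0.0.1 on the host, target 10.0.0.3 inside the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

  # Bridge the two *_br peers together, bring everything up, open TCP/4420
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that in place, the host-side initiator reaches the SPDK target at 10.0.0.3:4420 across the bridge, which is exactly what the 10.0.0.3/10.0.0.4 pings (host to namespace) and the 10.0.0.1/10.0.0.2 pings (namespace back to host) verify before nvmfappstart launches nvmf_tgt inside nvmf_tgt_ns_spdk.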
00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:31.956 09:37:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:31.956 [2024-11-05 09:37:17.844120] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:31.956 [2024-11-05 09:37:17.844243] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.214 [2024-11-05 09:37:17.994938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.214 [2024-11-05 09:37:18.028771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.214 [2024-11-05 09:37:18.029036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.214 [2024-11-05 09:37:18.029231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.214 [2024-11-05 09:37:18.029361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.214 [2024-11-05 09:37:18.029398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.214 [2024-11-05 09:37:18.030284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.214 [2024-11-05 09:37:18.030490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.214 [2024-11-05 09:37:18.030494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.214 [2024-11-05 09:37:18.030355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.214 [2024-11-05 09:37:18.060785] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:32.214 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:32.214 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:14:32.214 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:32.214 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:32.214 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:32.215 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.215 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:32.215 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:32.782 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:32.782 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:33.040 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:14:33.040 09:37:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:33.299 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:33.299 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:14:33.299 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:33.299 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:33.299 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:33.875 [2024-11-05 09:37:19.538557] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.875 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.875 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:33.875 09:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:34.138 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:34.138 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:34.397 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:34.966 [2024-11-05 09:37:20.627958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:34.966 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:34.966 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:14:34.966 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:34.966 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:34.966 09:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:14:36.342 Initializing NVMe Controllers 00:14:36.342 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:14:36.342 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:14:36.342 Initialization complete. Launching workers. 00:14:36.342 ======================================================== 00:14:36.342 Latency(us) 00:14:36.342 Device Information : IOPS MiB/s Average min max 00:14:36.342 PCIE (0000:00:10.0) NSID 1 from core 0: 24271.30 94.81 1318.06 345.10 9610.18 00:14:36.342 ======================================================== 00:14:36.342 Total : 24271.30 94.81 1318.06 345.10 9610.18 00:14:36.342 00:14:36.342 09:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:37.718 Initializing NVMe Controllers 00:14:37.718 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.718 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:37.718 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:37.718 Initialization complete. Launching workers. 
00:14:37.718 ======================================================== 00:14:37.718 Latency(us) 00:14:37.718 Device Information : IOPS MiB/s Average min max 00:14:37.718 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3779.32 14.76 264.28 101.23 7157.83 00:14:37.718 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.88 0.48 8072.12 4979.12 12012.15 00:14:37.718 ======================================================== 00:14:37.718 Total : 3903.19 15.25 512.08 101.23 12012.15 00:14:37.718 00:14:37.718 09:37:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:39.093 Initializing NVMe Controllers 00:14:39.093 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:39.093 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:39.093 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:39.093 Initialization complete. Launching workers. 00:14:39.093 ======================================================== 00:14:39.093 Latency(us) 00:14:39.093 Device Information : IOPS MiB/s Average min max 00:14:39.093 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8644.52 33.77 3703.05 574.24 9052.53 00:14:39.093 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3925.79 15.34 8182.41 5127.15 16937.05 00:14:39.093 ======================================================== 00:14:39.093 Total : 12570.32 49.10 5101.99 574.24 16937.05 00:14:39.093 00:14:39.093 09:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:39.093 09:37:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:14:41.625 Initializing NVMe Controllers 00:14:41.625 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.625 Controller IO queue size 128, less than required. 00:14:41.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:41.625 Controller IO queue size 128, less than required. 00:14:41.625 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:41.625 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:41.626 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:41.626 Initialization complete. Launching workers. 
00:14:41.626 ======================================================== 00:14:41.626 Latency(us) 00:14:41.626 Device Information : IOPS MiB/s Average min max 00:14:41.626 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1895.46 473.87 68749.00 33796.68 111745.83 00:14:41.626 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 635.65 158.91 214497.63 66644.95 337214.08 00:14:41.626 ======================================================== 00:14:41.626 Total : 2531.12 632.78 105351.62 33796.68 337214.08 00:14:41.626 00:14:41.626 09:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:14:41.884 Initializing NVMe Controllers 00:14:41.884 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.884 Controller IO queue size 128, less than required. 00:14:41.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:41.884 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:41.884 Controller IO queue size 128, less than required. 00:14:41.884 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:41.884 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:41.884 WARNING: Some requested NVMe devices were skipped 00:14:41.884 No valid NVMe controllers or AIO or URING devices found 00:14:41.884 09:37:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:14:44.416 Initializing NVMe Controllers 00:14:44.416 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.416 Controller IO queue size 128, less than required. 00:14:44.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.416 Controller IO queue size 128, less than required. 00:14:44.416 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.416 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.416 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:44.416 Initialization complete. Launching workers. 
00:14:44.416 00:14:44.416 ==================== 00:14:44.416 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:44.416 TCP transport: 00:14:44.416 polls: 9124 00:14:44.416 idle_polls: 4673 00:14:44.416 sock_completions: 4451 00:14:44.416 nvme_completions: 6745 00:14:44.416 submitted_requests: 10126 00:14:44.416 queued_requests: 1 00:14:44.416 00:14:44.416 ==================== 00:14:44.416 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:44.416 TCP transport: 00:14:44.416 polls: 9351 00:14:44.416 idle_polls: 4814 00:14:44.416 sock_completions: 4537 00:14:44.416 nvme_completions: 7003 00:14:44.416 submitted_requests: 10498 00:14:44.416 queued_requests: 1 00:14:44.416 ======================================================== 00:14:44.416 Latency(us) 00:14:44.416 Device Information : IOPS MiB/s Average min max 00:14:44.416 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1685.98 421.50 77301.70 42787.46 115133.17 00:14:44.416 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1750.48 437.62 73611.98 29042.26 114627.13 00:14:44.416 ======================================================== 00:14:44.416 Total : 3436.46 859.12 75422.21 29042.26 115133.17 00:14:44.416 00:14:44.416 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:14:44.416 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:44.675 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:44.675 rmmod nvme_tcp 00:14:44.675 rmmod nvme_fabrics 00:14:44.933 rmmod nvme_keyring 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 74121 ']' 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 74121 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 74121 ']' 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 74121 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74121 00:14:44.933 killing process with pid 74121 00:14:44.933 09:37:30 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74121' 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 74121 00:14:44.933 09:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 74121 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:45.501 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:14:45.760 ************************************ 00:14:45.760 END TEST nvmf_perf 00:14:45.760 ************************************ 
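One way to read the spdk_nvme_perf summary tables above: the Total row's average latency is consistent with an IOPS-weighted mean of the per-namespace rows. A quick awk check against the 256 KiB mixed-size run reproduces the printed totals to rounding; the input numbers are copied from that table, while the weighting rule itself is inferred from the arithmetic rather than quoted from perf's source:

  awk 'BEGIN {
    i1 = 1895.46; a1 = 68749.00     # NSID 1 row: IOPS, average latency (us)
    i2 = 635.65;  a2 = 214497.63    # NSID 2 row: IOPS, average latency (us)
    printf "Total: %.2f IOPS, %.2f us avg\n", i1 + i2, (i1*a1 + i2*a2)/(i1 + i2)
  }'
  # prints roughly: Total: 2531.11 IOPS, 105351.6 us avg
  # table printed:  Total: 2531.12 IOPS, 105351.62 us avg
  # (small residue comes from rounding in the displayed per-namespace values)

The same check applied to the other runs gives the same agreement, which is why the 262144-byte runs show Total averages sitting between the fast NSID 1 (malloc bdev) row and the much slower NSID 2 (NVMe bdev) row, pulled toward whichever namespace completed more I/O.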
00:14:45.760 00:14:45.760 real 0m14.404s 00:14:45.760 user 0m52.318s 00:14:45.760 sys 0m3.977s 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:45.760 ************************************ 00:14:45.760 START TEST nvmf_fio_host 00:14:45.760 ************************************ 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:14:45.760 * Looking for test storage... 00:14:45.760 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:45.760 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:46.020 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:46.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.021 --rc genhtml_branch_coverage=1 00:14:46.021 --rc genhtml_function_coverage=1 00:14:46.021 --rc genhtml_legend=1 00:14:46.021 --rc geninfo_all_blocks=1 00:14:46.021 --rc geninfo_unexecuted_blocks=1 00:14:46.021 00:14:46.021 ' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:46.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.021 --rc genhtml_branch_coverage=1 00:14:46.021 --rc genhtml_function_coverage=1 00:14:46.021 --rc genhtml_legend=1 00:14:46.021 --rc geninfo_all_blocks=1 00:14:46.021 --rc geninfo_unexecuted_blocks=1 00:14:46.021 00:14:46.021 ' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:46.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.021 --rc genhtml_branch_coverage=1 00:14:46.021 --rc genhtml_function_coverage=1 00:14:46.021 --rc genhtml_legend=1 00:14:46.021 --rc geninfo_all_blocks=1 00:14:46.021 --rc geninfo_unexecuted_blocks=1 00:14:46.021 00:14:46.021 ' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:46.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:46.021 --rc genhtml_branch_coverage=1 00:14:46.021 --rc genhtml_function_coverage=1 00:14:46.021 --rc genhtml_legend=1 00:14:46.021 --rc geninfo_all_blocks=1 00:14:46.021 --rc geninfo_unexecuted_blocks=1 00:14:46.021 00:14:46.021 ' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.021 09:37:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:46.021 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
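Also visible in the trace above is a benign shell error from test/nvmf/common.sh line 33, "[: : integer expression expected", raised when an empty variable reaches a numeric test ('[' '' -eq 1 ']'). A minimal bash sketch of the failure mode and the usual guard; "flag" is a hypothetical stand-in, since the actual variable name at common.sh:33 is not visible in the trace:

    #!/usr/bin/env bash
    flag=""                           # hypothetical stand-in for the empty variable
    if [ "$flag" -eq 1 ]; then        # reproduces: [: : integer expression expected
        echo unreachable
    fi
    if [ "${flag:-0}" -eq 1 ]; then   # guarded form: empty string defaults to 0
        echo unreachable
    fi

The test harness tolerates the error because the expression simply evaluates false, but the guarded form keeps the log clean.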
00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.021 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:46.022 Cannot find device "nvmf_init_br" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:46.022 Cannot find device "nvmf_init_br2" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:46.022 Cannot find device "nvmf_tgt_br" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:14:46.022 Cannot find device "nvmf_tgt_br2" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:46.022 Cannot find device "nvmf_init_br" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:46.022 Cannot find device "nvmf_init_br2" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:46.022 Cannot find device "nvmf_tgt_br" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:46.022 Cannot find device "nvmf_tgt_br2" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:46.022 Cannot find device "nvmf_br" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:46.022 Cannot find device "nvmf_init_if" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:46.022 Cannot find device "nvmf_init_if2" 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:46.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:46.022 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:46.022 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:46.281 09:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:46.281 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:14:46.281 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:14:46.281 00:14:46.281 --- 10.0.0.3 ping statistics --- 00:14:46.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.281 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:46.281 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:46.281 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:14:46.281 00:14:46.281 --- 10.0.0.4 ping statistics --- 00:14:46.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.281 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:46.281 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:46.281 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:14:46.281 00:14:46.281 --- 10.0.0.1 ping statistics --- 00:14:46.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.281 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:46.281 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:46.281 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:14:46.281 00:14:46.281 --- 10.0.0.2 ping statistics --- 00:14:46.281 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:46.281 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=74576 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 74576 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@833 -- # '[' -z 74576 ']' 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:46.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:46.281 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:46.282 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:46.282 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:46.540 [2024-11-05 09:37:32.241567] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:46.540 [2024-11-05 09:37:32.241657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:46.540 [2024-11-05 09:37:32.392062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:46.540 [2024-11-05 09:37:32.431129] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.540 [2024-11-05 09:37:32.431194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.540 [2024-11-05 09:37:32.431208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:46.540 [2024-11-05 09:37:32.431218] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:46.540 [2024-11-05 09:37:32.431227] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
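With the target up, the next stretch of the trace configures it over JSON-RPC before fio runs. Condensed from the host/fio.sh rpc.py calls that follow (arguments verbatim from the trace; only the $rpc shorthand is introduced here):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, flags as traced
    $rpc bdev_malloc_create 64 512 -b Malloc1                      # 64 MB malloc bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1  # exposed as NSID 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

fio then reaches the namespace through the SPDK plugin with --filename='trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1', as the trace below shows.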
00:14:46.540 [2024-11-05 09:37:32.432090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:46.540 [2024-11-05 09:37:32.432568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:46.540 [2024-11-05 09:37:32.432716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:46.540 [2024-11-05 09:37:32.432748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:46.540 [2024-11-05 09:37:32.466284] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:46.816 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:46.816 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:14:46.816 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:47.083 [2024-11-05 09:37:32.797044] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.083 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:14:47.083 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:47.083 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:47.083 09:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:47.340 Malloc1 00:14:47.340 09:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:47.598 09:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:47.857 09:37:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:48.115 [2024-11-05 09:37:34.040704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:48.115 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:14:48.681 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:14:48.681 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:48.681 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:48.681 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:48.681 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:48.681 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:48.682 09:37:34 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:14:48.682 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:14:48.682 fio-3.35 00:14:48.682 Starting 1 thread 00:14:51.213 00:14:51.213 test: (groupid=0, jobs=1): err= 0: pid=74656: Tue Nov 5 09:37:36 2024 00:14:51.213 read: IOPS=8735, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec) 00:14:51.213 slat (usec): min=2, max=314, avg= 2.44, stdev= 2.99 00:14:51.213 clat (usec): min=2551, max=14014, avg=7633.52, stdev=531.74 00:14:51.213 lat (usec): min=2586, max=14016, avg=7635.96, stdev=531.45 00:14:51.213 clat percentiles (usec): 00:14:51.213 | 1.00th=[ 6587], 5.00th=[ 6915], 10.00th=[ 7046], 20.00th=[ 7242], 00:14:51.213 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7701], 00:14:51.213 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[ 8455], 00:14:51.213 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[12518], 99.95th=[13435], 00:14:51.213 | 99.99th=[13960] 00:14:51.213 bw ( KiB/s): min=34131, max=35592, per=99.93%, avg=34918.75, stdev=601.38, samples=4 00:14:51.213 iops : min= 8532, max= 8898, avg=8729.50, stdev=150.67, samples=4 00:14:51.213 write: IOPS=8732, BW=34.1MiB/s (35.8MB/s)(68.5MiB/2007msec); 0 zone resets 00:14:51.213 slat (usec): min=2, max=257, avg= 2.57, stdev= 2.15 00:14:51.213 clat (usec): min=2383, max=13398, avg=6958.10, stdev=483.71 00:14:51.213 lat (usec): min=2397, max=13400, avg=6960.66, stdev=483.51 00:14:51.213 clat percentiles (usec): 
00:14:51.213 | 1.00th=[ 5997], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6652], 00:14:51.213 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7046], 00:14:51.213 | 70.00th=[ 7111], 80.00th=[ 7242], 90.00th=[ 7439], 95.00th=[ 7635], 00:14:51.213 | 99.00th=[ 8094], 99.50th=[ 8291], 99.90th=[11731], 99.95th=[12518], 00:14:51.213 | 99.99th=[13435] 00:14:51.213 bw ( KiB/s): min=34752, max=35144, per=99.94%, avg=34912.50, stdev=174.54, samples=4 00:14:51.213 iops : min= 8688, max= 8786, avg=8728.00, stdev=43.60, samples=4 00:14:51.213 lat (msec) : 4=0.08%, 10=99.69%, 20=0.23% 00:14:51.213 cpu : usr=70.89%, sys=22.18%, ctx=32, majf=0, minf=7 00:14:51.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:14:51.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:51.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:51.213 issued rwts: total=17533,17527,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:51.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:51.213 00:14:51.213 Run status group 0 (all jobs): 00:14:51.213 READ: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:14:51.213 WRITE: bw=34.1MiB/s (35.8MB/s), 34.1MiB/s-34.1MiB/s (35.8MB/s-35.8MB/s), io=68.5MiB (71.8MB), run=2007-2007msec 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:14:51.213 09:37:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:14:51.213 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:14:51.213 fio-3.35 00:14:51.213 Starting 1 thread 00:14:53.746 00:14:53.746 test: (groupid=0, jobs=1): err= 0: pid=74700: Tue Nov 5 09:37:39 2024 00:14:53.746 read: IOPS=8100, BW=127MiB/s (133MB/s)(254MiB/2009msec) 00:14:53.746 slat (usec): min=2, max=161, avg= 3.90, stdev= 2.42 00:14:53.746 clat (usec): min=3158, max=18521, avg=8858.91, stdev=3124.34 00:14:53.746 lat (usec): min=3162, max=18525, avg=8862.81, stdev=3124.38 00:14:53.746 clat percentiles (usec): 00:14:53.746 | 1.00th=[ 4015], 5.00th=[ 4752], 10.00th=[ 5276], 20.00th=[ 6128], 00:14:53.746 | 30.00th=[ 6849], 40.00th=[ 7570], 50.00th=[ 8225], 60.00th=[ 8979], 00:14:53.746 | 70.00th=[10028], 80.00th=[11469], 90.00th=[13435], 95.00th=[15401], 00:14:53.746 | 99.00th=[17171], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:14:53.746 | 99.99th=[18482] 00:14:53.746 bw ( KiB/s): min=61376, max=73760, per=51.59%, avg=66864.00, stdev=5122.03, samples=4 00:14:53.746 iops : min= 3836, max= 4610, avg=4179.00, stdev=320.13, samples=4 00:14:53.746 write: IOPS=4568, BW=71.4MiB/s (74.9MB/s)(136MiB/1908msec); 0 zone resets 00:14:53.746 slat (usec): min=31, max=468, avg=39.36, stdev= 9.75 00:14:53.746 clat (usec): min=3853, max=22666, avg=12179.01, stdev=2167.82 00:14:53.746 lat (usec): min=3888, max=22703, avg=12218.37, stdev=2167.73 00:14:53.746 clat percentiles (usec): 00:14:53.746 | 1.00th=[ 8094], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:14:53.746 | 30.00th=[10945], 40.00th=[11469], 50.00th=[11863], 60.00th=[12387], 00:14:53.746 | 70.00th=[13042], 80.00th=[13960], 90.00th=[15270], 95.00th=[16188], 00:14:53.746 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20317], 99.95th=[20579], 00:14:53.746 | 99.99th=[22676] 00:14:53.747 bw ( KiB/s): min=62400, max=77120, per=94.85%, avg=69336.00, stdev=6081.76, samples=4 00:14:53.747 iops : min= 3900, max= 4820, avg=4333.50, stdev=380.11, samples=4 00:14:53.747 lat (msec) : 4=0.63%, 10=49.45%, 20=49.88%, 50=0.05% 00:14:53.747 cpu : usr=82.22%, sys=13.25%, ctx=18, majf=0, minf=4 00:14:53.747 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:14:53.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:53.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:14:53.747 issued rwts: total=16274,8717,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:53.747 latency : target=0, window=0, percentile=100.00%, depth=128 00:14:53.747 00:14:53.747 Run status group 0 (all jobs): 00:14:53.747 READ: bw=127MiB/s (133MB/s), 
127MiB/s-127MiB/s (133MB/s-133MB/s), io=254MiB (267MB), run=2009-2009msec 00:14:53.747 WRITE: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=136MiB (143MB), run=1908-1908msec 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:53.747 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:54.006 rmmod nvme_tcp 00:14:54.006 rmmod nvme_fabrics 00:14:54.006 rmmod nvme_keyring 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 74576 ']' 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 74576 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 74576 ']' 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 74576 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74576 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:54.006 killing process with pid 74576 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74576' 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 74576 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 74576 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@791 -- # iptables-save 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:14:54.006 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:54.264 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:54.264 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:54.264 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:54.264 09:37:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:14:54.264 00:14:54.264 real 0m8.589s 00:14:54.264 user 0m34.644s 00:14:54.264 sys 0m2.343s 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.264 09:37:40 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.264 ************************************ 00:14:54.264 END TEST nvmf_fio_host 00:14:54.264 ************************************ 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:14:54.524 ************************************ 00:14:54.524 START TEST nvmf_failover 00:14:54.524 
************************************ 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:14:54.524 * Looking for test storage... 00:14:54.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:54.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.524 --rc genhtml_branch_coverage=1 00:14:54.524 --rc genhtml_function_coverage=1 00:14:54.524 --rc genhtml_legend=1 00:14:54.524 --rc geninfo_all_blocks=1 00:14:54.524 --rc geninfo_unexecuted_blocks=1 00:14:54.524 00:14:54.524 ' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:54.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.524 --rc genhtml_branch_coverage=1 00:14:54.524 --rc genhtml_function_coverage=1 00:14:54.524 --rc genhtml_legend=1 00:14:54.524 --rc geninfo_all_blocks=1 00:14:54.524 --rc geninfo_unexecuted_blocks=1 00:14:54.524 00:14:54.524 ' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:54.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.524 --rc genhtml_branch_coverage=1 00:14:54.524 --rc genhtml_function_coverage=1 00:14:54.524 --rc genhtml_legend=1 00:14:54.524 --rc geninfo_all_blocks=1 00:14:54.524 --rc geninfo_unexecuted_blocks=1 00:14:54.524 00:14:54.524 ' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:54.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:54.524 --rc genhtml_branch_coverage=1 00:14:54.524 --rc genhtml_function_coverage=1 00:14:54.524 --rc genhtml_legend=1 00:14:54.524 --rc geninfo_all_blocks=1 00:14:54.524 --rc geninfo_unexecuted_blocks=1 00:14:54.524 00:14:54.524 ' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.524 
09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:54.524 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:54.524 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
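The "[: : integer expression expected" message in the trace above is a genuine shell error, not test output: build_nvmf_app_args at nvmf/common.sh line 33 feeds an empty SPDK_TEST-style flag to a numeric test, and bash rejects '[' '' -eq 1 ']' outright instead of evaluating it as false. A minimal sketch of the failure and the usual guard (the variable name is illustrative, not the one common.sh uses):

  flag=''                                   # flag never exported by this job, so it expands empty
  [ "$flag" -eq 1 ] && echo on              # bash: [: : integer expression expected
  [ "${flag:-0}" -eq 1 ] && echo on         # guarded: empty defaults to 0, test is quietly false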
00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:54.525 Cannot find device "nvmf_init_br" 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:54.525 Cannot find device "nvmf_init_br2" 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:14:54.525 Cannot find device "nvmf_tgt_br" 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:14:54.525 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:54.783 Cannot find device "nvmf_tgt_br2" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:54.783 Cannot find device "nvmf_init_br" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:54.783 Cannot find device "nvmf_init_br2" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:54.783 Cannot find device "nvmf_tgt_br" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:54.783 Cannot find device "nvmf_tgt_br2" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:54.783 Cannot find device "nvmf_br" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:54.783 Cannot find device "nvmf_init_if" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:54.783 Cannot find device "nvmf_init_if2" 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:54.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:54.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:54.783 
09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:54.783 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:54.784 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT'
00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:14:55.042 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:14:55.042 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms
00:14:55.042
00:14:55.042 --- 10.0.0.3 ping statistics ---
00:14:55.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:55.042 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms
00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:14:55.042 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:14:55.042 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms
00:14:55.042
00:14:55.042 --- 10.0.0.4 ping statistics ---
00:14:55.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:55.042 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms
00:14:55.042 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:14:55.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:55.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms
00:14:55.042
00:14:55.042 --- 10.0.0.1 ping statistics ---
00:14:55.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:55.042 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:14:55.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:14:55.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms
00:14:55.043
00:14:55.043 --- 10.0.0.2 ping statistics ---
00:14:55.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:55.043 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=74975
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 74975
00:14:55.043 09:37:40
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 74975 ']' 00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.043 09:37:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:55.043 [2024-11-05 09:37:40.889555] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:14:55.043 [2024-11-05 09:37:40.890150] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:55.301 [2024-11-05 09:37:41.039217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:55.301 [2024-11-05 09:37:41.070535] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:55.301 [2024-11-05 09:37:41.070607] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:55.302 [2024-11-05 09:37:41.070635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:55.302 [2024-11-05 09:37:41.070643] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:55.302 [2024-11-05 09:37:41.070650] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
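For reference, the namespace plumbing that the nvmf_veth_init trace above built can be reproduced by hand with a sketch like the following, reduced to one initiator-side and one target-side veth pair (the helper actually creates two of each, plus a second iptables rule for nvmf_init_if2); interface names and the 10.0.0.0/24 addresses are the ones shown in the trace:

  ip netns add nvmf_tgt_ns_spdk                                  # private namespace for nvmf_tgt
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator end + its bridge port
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target end + its bridge port
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side ports
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                             # initiator -> target, as verified above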
00:14:55.302 [2024-11-05 09:37:41.071393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.302 [2024-11-05 09:37:41.071538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:55.302 [2024-11-05 09:37:41.071542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.302 [2024-11-05 09:37:41.100812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.238 09:37:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:56.512 [2024-11-05 09:37:42.260350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.512 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:14:56.770 Malloc0 00:14:56.770 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:57.030 09:37:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:57.288 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:57.548 [2024-11-05 09:37:43.420616] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:57.548 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:14:57.807 [2024-11-05 09:37:43.680815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:14:57.807 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:14:58.066 [2024-11-05 09:37:43.973163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=75033 00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
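Stripped of the xtrace noise, the target bring-up traced above reduces to this RPC sequence against the nvmf_tgt just started in nvmf_tgt_ns_spdk (rpc.py is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py; the option comments are glosses, not from the trace):

  rpc.py nvmf_create_transport -t tcp -o -u 8192                 # TCP transport; -u sets the 8 KiB I/O unit size
  rpc.py bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB RAM disk, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422

The three portals are what the test toggles next: bdevperf attaches NVMe0 through 4420 and 4421 with -x failover, then listeners are removed and re-added one at a time while I/O runs, as traced below.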
00:14:58.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 75033 /var/tmp/bdevperf.sock
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75033 ']'
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:14:58.066 09:37:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:14:58.635 09:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:14:58.635 09:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:14:58.635 09:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:14:58.894 NVMe0n1
00:14:58.894 09:37:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:14:59.153
00:14:59.153 09:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=75049
00:14:59.153 09:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:14:59.153 09:37:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:15:00.089 09:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:15:00.657 09:37:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:15:03.943 09:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:03.943
00:15:03.943 09:37:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:15:04.201 09:37:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:15:07.483 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:15:07.483 [2024-11-05 09:37:53.312279] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:15:07.483 09:37:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:15:08.417 09:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:15:08.983 09:37:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 75049
00:15:14.244 {
00:15:14.244   "results": [
00:15:14.244     {
00:15:14.244       "job": "NVMe0n1",
00:15:14.244       "core_mask": "0x1",
00:15:14.244       "workload": "verify",
00:15:14.244       "status": "finished",
00:15:14.244       "verify_range": {
00:15:14.244         "start": 0,
00:15:14.244         "length": 16384
00:15:14.244       },
00:15:14.244       "queue_depth": 128,
00:15:14.244       "io_size": 4096,
00:15:14.244       "runtime": 15.009433,
00:15:14.244       "iops": 8791.138212882524,
00:15:14.244       "mibps": 34.34038364407236,
00:15:14.244       "io_failed": 3053,
00:15:14.244       "io_timeout": 0,
00:15:14.244       "avg_latency_us": 14197.126937273448,
00:15:14.244       "min_latency_us": 692.5963636363637,
00:15:14.244       "max_latency_us": 29074.15272727273
00:15:14.244     }
00:15:14.244   ],
00:15:14.244   "core_count": 1
00:15:14.244 }
00:15:14.244 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 75033
00:15:14.244 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75033 ']'
00:15:14.244 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75033
00:15:14.244 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75033
00:15:14.509 killing process with pid 75033
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75033'
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75033
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75033
00:15:14.509 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:14.509 [2024-11-05 09:37:44.043276] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:15:14.509 [2024-11-05 09:37:44.043379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75033 ]
00:15:14.509 [2024-11-05 09:37:44.183284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:14.509 [2024-11-05 09:37:44.215897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:15:14.509 [2024-11-05 09:37:44.245096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:15:14.509 Running I/O for 15 seconds...
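The JSON block above (printed just before the try.txt replay) is bdevperf's summary for the 15-second verify run: roughly 8.8k IOPS at 4 KiB, queue depth 128, with 3053 failed I/Os, consistent with the commands aborted at each listener removal (replayed below). Its throughput fields are redundant, which allows a quick consistency check; a sketch with the values copied from the JSON (awk used only for the floating-point math):

  awk 'BEGIN {
    iops = 8791.138212882524; io_size = 4096; runtime = 15.009433
    printf "mibps = %.11f\n", iops * io_size / (1024 * 1024)   # reproduces "mibps": 34.34038364407236
    printf "total I/O ~= %d\n", iops * runtime                 # ~132k completed operations
  }'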
00:15:14.509 6805.00 IOPS, 26.58 MiB/s [2024-11-05T09:38:00.467Z]
00:15:14.509 [2024-11-05 09:37:46.330539] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:14.509 [2024-11-05 09:37:46.331119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.509 [... three more ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs (qid:0 cid:1-3) trimmed ...]
00:15:14.509 [2024-11-05 09:37:46.331716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2495710 is same with the state(6) to be set
00:15:14.509 [2024-11-05 09:37:46.332151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:14.509 [2024-11-05 09:37:46.332294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.509 [... dozens of near-identical pairs trimmed: in-flight WRITE commands on sqid:1 (lba 63776-64288, len:8), each printed by nvme_io_qpair_print_command and completed ABORTED - SQ DELETION (00/08) as qpair 1 is torn down after the 10.0.0.3:4420 listener removal ...]
00:15:14.511 [2024-11-05 09:37:46.338581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:54 nsid:1 lba:64296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:64312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:64328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:64336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:64344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:64352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:64376 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.338964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:64384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.338978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:64392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:64400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:64432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:64456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 
09:37:46.339311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:64464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:64472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:64480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:64488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.511 [2024-11-05 09:37:46.339461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.511 [2024-11-05 09:37:46.339476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:64504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:64512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:64520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:64528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:64536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339627] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:64552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:64560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:64584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:64592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:64600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:64608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:64616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.339975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.339990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:64632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.340019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:64640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.340049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:64648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.340078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:63656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:63664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:63672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:63688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:63696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:63720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:63736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:63752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:63760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:63768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.512 [2024-11-05 09:37:46.340621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 09:37:46.340638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:64656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.512 [2024-11-05 09:37:46.340652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.512 [2024-11-05 
09:37:46.340669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2531fc0 is same with the state(6) to be set
00:15:14.512 [2024-11-05 09:37:46.340721] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:14.512 [2024-11-05 09:37:46.340748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:14.512 [2024-11-05 09:37:46.340759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64664 len:8 PRP1 0x0 PRP2 0x0
00:15:14.512 [2024-11-05 09:37:46.340773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.512 [2024-11-05 09:37:46.340843] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421
00:15:14.512 [2024-11-05 09:37:46.340878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:15:14.512 [2024-11-05 09:37:46.347747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2495710 (9): Bad file descriptor
00:15:14.512 [2024-11-05 09:37:46.351665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:15:14.512 [2024-11-05 09:37:46.382303] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:15:14.512 7437.50 IOPS, 29.05 MiB/s [2024-11-05T09:38:00.470Z]
8002.33 IOPS, 31.26 MiB/s [2024-11-05T09:38:00.470Z]
8281.50 IOPS, 32.35 MiB/s [2024-11-05T09:38:00.470Z]
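Editor's note on the abort storm above: every I/O still queued on qpair 1 is completed with status (00/08), after which bdev_nvme fails the path at 10.0.0.3:4420 over to 10.0.0.3:4421 and resets the controller. The sketch below is a standalone illustration, not SPDK source (the file name decode_abort.c is made up); the bit layout follows the NVMe base specification's completion-queue-entry status field, and the "(00/08)" pair in these lines reads as (SCT/SC), i.e. status code type 0x0 (generic) with status code 0x08, "Command Aborted due to SQ Deletion". It also checks the bdevperf progress arithmetic: the log's len:8 together with len:0x1000 implies 512-byte blocks, so each I/O is 4096 bytes.

/* decode_abort.c -- illustrative only, not part of the SPDK tree. */
#include <stdio.h>
#include <stdint.h>

/* NVMe CQE status field: bit 0 = phase tag (p), bits 8:1 = SC,
 * bits 11:9 = SCT, bit 14 = more (m), bit 15 = do-not-retry (dnr). */
static void print_status(uint16_t raw)
{
    unsigned p   = raw & 0x1;
    unsigned sc  = (raw >> 1) & 0xff;
    unsigned sct = (raw >> 9) & 0x7;
    unsigned m   = (raw >> 14) & 0x1;
    unsigned dnr = (raw >> 15) & 0x1;
    /* SCT 0x0 / SC 0x08 is the generic "Command Aborted due to SQ
     * Deletion" status, rendered above as "ABORTED - SQ DELETION (00/08)". */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    print_status((0x0u << 9) | (0x08u << 1)); /* -> (00/08) p:0 m:0 dnr:0 */

    /* Each command is len:8 blocks of 512 B = 4096 B, so the progress
     * figures are self-consistent: 7437.50 IOPS * 4096 B = 29.05 MiB/s. */
    double iops = 7437.50;
    printf("%.2f IOPS -> %.2f MiB/s\n", iops, iops * 4096.0 / (1024 * 1024));
    return 0;
}

Note that dnr:0 marks the status as retryable, which is consistent with the reset completing successfully and throughput recovering in the progress lines above rather than the I/O erroring out.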
[2024-11-05 09:37:49.996514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.997045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.997169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:72720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.997281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.997358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.997428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.997496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.997564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.997632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.997717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.997807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.997904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.997979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.998062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.998134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:49.998217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.998288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.998368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.998440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.998532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.998613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.998683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.998751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.998830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.998923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.999012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.999093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.999200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.999283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.999366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.999436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.999503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.999586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.999666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.999748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:49.999832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:49.999942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:50.000039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.000110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:50.000188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.000258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:50.000345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.000424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:50.000507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.000577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:50.000655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.000734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.513 [2024-11-05 09:37:50.000821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.000951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:72776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.001043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.001115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 
09:37:50.001203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.001299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:72792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.001383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.001453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.001530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.001600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.001668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.001734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.001823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.001921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:72824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.002005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.513 [2024-11-05 09:37:50.002087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.513 [2024-11-05 09:37:50.002172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.002270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:72264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.002357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.002441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:72272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.002525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.002595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.002680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.002759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.002852] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.002946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:72296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.003027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.003109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:72304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.003193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.003263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.003376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.003450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.003519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.003605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.003675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.003753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.003822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.003927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:72344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.004019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.004090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:72352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.004169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.004239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:72360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.004331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.004401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:72368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.004484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.004553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:72376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.004631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.004709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:72384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.004779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.004891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:72840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.004986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.005068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.005156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.005228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.005306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.005391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:72864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.005479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.005549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.005627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.005697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:72880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.005775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.005859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:72888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.005944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.006083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:72392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.006227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.006397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:72408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.006545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:72416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.006704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:72424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.006858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.006945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:72432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.007035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.007106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:72440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.007185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.007254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:72448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.514 [2024-11-05 09:37:50.007332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.007426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:72904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.007513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.007597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:72912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.007666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 
[2024-11-05 09:37:50.007733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.007800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.007896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.007984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.008064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.008134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.008201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:72944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.008278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.008348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:72952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.008426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.514 [2024-11-05 09:37:50.008505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.514 [2024-11-05 09:37:50.008594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.008664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:72456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.008758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.008832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.008949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.009024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:72472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.009092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.009173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.009252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.009332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:72488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.009430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.009511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:72496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.009589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.009669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:72504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.009753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.009832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.009923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.010075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:72528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.010209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.010361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:72544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.010516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.010664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.010846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.010938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:72568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.011020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.011101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:72576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.011168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.011235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:72968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.011312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.011395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.011480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.011551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.011630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.011696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.011773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.011863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:73000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.011956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:73008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.012109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:73016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.012258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:73024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.012406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:73032 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.012557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:73040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.012714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:73048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.012917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.012995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:73056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.013075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.013154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:73064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.013234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.013313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:73072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.013397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.013475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:73080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.013553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.013624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:73088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.515 [2024-11-05 09:37:50.013693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.013770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:72584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.013872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.013947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:72592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.014029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.014100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:72600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 
[2024-11-05 09:37:50.014178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.014257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:72608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.014336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.014405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:72616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.014483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.014552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:72624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.014629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.014699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:72632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.014777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.515 [2024-11-05 09:37:50.014862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:72640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.515 [2024-11-05 09:37:50.014947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:72648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.015098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.015232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.015407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:72672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.015574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:72680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.015711] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:72688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.015875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.015961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:72696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.516 [2024-11-05 09:37:50.016049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.016117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25329e0 is same with the state(6) to be set 00:15:14.516 [2024-11-05 09:37:50.016206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.016271] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.016334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72704 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.016399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.016464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.016523] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.016592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73096 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.016659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.016738] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.016801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.016900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73104 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.016973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.017096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.017155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73112 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.017218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.017353] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.017450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73120 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.017530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.017619] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.017633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73128 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.017648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.017675] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.017687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73136 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.017701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017717] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.017728] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.017739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73144 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.017754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:15:14.516 [2024-11-05 09:37:50.017781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:15:14.516 [2024-11-05 09:37:50.017792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:73152 len:8 PRP1 0x0 PRP2 0x0 00:15:14.516 [2024-11-05 09:37:50.017807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.017888] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:15:14.516 [2024-11-05 09:37:50.017977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.516 [2024-11-05 09:37:50.018001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.018019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.516 [2024-11-05 09:37:50.018034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.018050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.516 [2024-11-05 09:37:50.018065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.018080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:14.516 [2024-11-05 09:37:50.018095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:50.018111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:15:14.516 [2024-11-05 09:37:50.018181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2495710 (9): Bad file descriptor 00:15:14.516 [2024-11-05 09:37:50.022177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:15:14.516 [2024-11-05 09:37:50.044311] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:15:14.516 8339.40 IOPS, 32.58 MiB/s [2024-11-05T09:38:00.474Z] 8471.33 IOPS, 33.09 MiB/s [2024-11-05T09:38:00.474Z] 8551.71 IOPS, 33.41 MiB/s [2024-11-05T09:38:00.474Z] 8627.38 IOPS, 33.70 MiB/s [2024-11-05T09:38:00.474Z] 8685.11 IOPS, 33.93 MiB/s [2024-11-05T09:38:00.474Z] [2024-11-05 09:37:54.619511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 
09:37:54.620349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.516 [2024-11-05 09:37:54.620460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.516 [2024-11-05 09:37:54.620475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.620506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.620563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.620595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.620626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.620657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:20000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.620688] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.620960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.620974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:20016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:20048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:20080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:20096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:20112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:20128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.517 [2024-11-05 09:37:54.621527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:19568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:19576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:19584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:19600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 
[2024-11-05 09:37:54.621736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:19608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:19624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.517 [2024-11-05 09:37:54.621851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:19632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.517 [2024-11-05 09:37:54.621872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.621889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:19640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.621904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.621921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:19648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.621936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.621954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.621969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.621986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:19672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622084] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:20136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:20144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:20208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622411] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:20256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.518 [2024-11-05 09:37:54.622587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:19712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19720 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:19736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.622978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.622993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.623009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.518 [2024-11-05 09:37:54.623024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.518 [2024-11-05 09:37:54.623041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.519 [2024-11-05 09:37:54.623055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:19800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:15:14.519 [2024-11-05 09:37:54.623087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.519 [2024-11-05 09:37:54.623127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:20288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:20304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:20336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:20344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:20368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:15:14.519 [2024-11-05 09:37:54.623646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.519 [2024-11-05 09:37:54.623677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:19824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.519 [2024-11-05 09:37:54.623709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.519 [2024-11-05 09:37:54.623726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.519 [2024-11-05 09:37:54.623741] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.519 [2024-11-05 09:37:54.623757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:19840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:15:14.519 [2024-11-05 09:37:54.623772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.519 [... identical READ / ABORTED - SQ DELETION completion pairs for lba:19848, 19856 and 19864 elided ...]
00:15:14.519 [2024-11-05 09:37:54.623908] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25326a0 is same with the state(6) to be set
00:15:14.519 [2024-11-05 09:37:54.623928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:15:14.519 [2024-11-05 09:37:54.623939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:15:14.519 [2024-11-05 09:37:54.623958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19872 len:8 PRP1 0x0 PRP2 0x0
00:15:14.519 [2024-11-05 09:37:54.623974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.519 [... identical manual-completion/abort sequences for queued WRITE commands lba:20392 through 20512 elided ...]
00:15:14.520 [2024-11-05 09:37:54.624894] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420
00:15:14.520 [2024-11-05 09:37:54.624961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:15:14.520 [2024-11-05 09:37:54.624985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:15:14.520 [... identical ASYNC EVENT REQUEST abort pairs for cid:1, cid:2 and cid:3 elided ...]
00:15:14.520 [2024-11-05 09:37:54.625102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:15:14.520 [2024-11-05 09:37:54.625156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2495710 (9): Bad file descriptor
00:15:14.520 [2024-11-05 09:37:54.629088] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:15:14.520 [2024-11-05 09:37:54.651447] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:15:14.520 8683.80 IOPS, 33.92 MiB/s
[2024-11-05T09:38:00.478Z] 8708.91 IOPS, 34.02 MiB/s
[2024-11-05T09:38:00.478Z] 8734.50 IOPS, 34.12 MiB/s
[2024-11-05T09:38:00.478Z] 8756.77 IOPS, 34.21 MiB/s
[2024-11-05T09:38:00.478Z] 8774.71 IOPS, 34.28 MiB/s
[2024-11-05T09:38:00.478Z] 8791.33 IOPS, 34.34 MiB/s
00:15:14.520 Latency(us)
00:15:14.520 [2024-11-05T09:38:00.478Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:15:14.520 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:14.520 Verification LBA range: start 0x0 length 0x4000
00:15:14.520 NVMe0n1                                   : 15.01       8791.14  34.34  203.41  0.00  14197.13  692.60  29074.15
00:15:14.520 [2024-11-05T09:38:00.478Z] ===================================================================================================================
00:15:14.520 [2024-11-05T09:38:00.478Z] Total      :             8791.14  34.34  203.41  0.00  14197.13  692.60  29074.15
00:15:14.520 Received shutdown signal, test time was about 15.000000 seconds
00:15:14.520
00:15:14.520 Latency(us)
00:15:14.520 [2024-11-05T09:38:00.478Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min  max
00:15:14.520 [2024-11-05T09:38:00.478Z] ===================================================================================================================
00:15:14.520 [2024-11-05T09:38:00.478Z] Total      :             0.00  0.00  0.00  0.00  0.00  0.00  0.00
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
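The grep above is the failover test's pass/fail gate: it counts "Resetting controller successful" messages in the captured bdevperf log and the test would abort unless exactly three resets succeeded, one per simulated path failure. A minimal sketch of that check, assuming the output was captured to a file such as try.txt as in this run:

  count=$(grep -c 'Resetting controller successful' try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi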
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=75227
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 75227 /var/tmp/bdevperf.sock
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 75227 ']'
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:14.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable
00:15:14.520 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:15:14.779 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:15:14.779 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0
00:15:14.779 09:38:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:15:15.344 [2024-11-05 09:38:01.002757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:15:15.344 09:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
00:15:15.344 [2024-11-05 09:38:01.299059] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 ***
00:15:15.602 09:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:15.859 NVMe0n1
00:15:15.859 09:38:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:16.117
00:15:16.117 09:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:15:16.375
00:15:16.633 09:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:16.633 09:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:15:16.891 09:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:17.149 09:38:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:15:20.452 09:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:20.452 09:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:15:20.452 09:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=75297
00:15:20.452 09:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:15:20.452 09:38:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 75297
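The three bdev_nvme_attach_controller calls above are what make the failover exercise possible: the first call creates the NVMe0n1 bdev over the path on port 4420, and repeating the call with the same bdev name (-b NVMe0) and subsystem NQN but a different port registers 4421 and 4422 as alternate paths, with -x failover selecting the failover multipath policy. A condensed sketch of that pattern, assuming rpc.py is on PATH (socket, address and ports as in the trace):

  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  for port in 4421 4422; do   # each extra call adds an alternate path, not a new bdev
      rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.3 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  done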
00:15:21.828 "io_size": 4096, 00:15:21.828 "runtime": 1.010679, 00:15:21.828 "iops": 6734.086688256113, 00:15:21.828 "mibps": 26.305026126000442, 00:15:21.828 "io_failed": 0, 00:15:21.828 "io_timeout": 0, 00:15:21.828 "avg_latency_us": 18929.691410252988, 00:15:21.828 "min_latency_us": 2457.6, 00:15:21.828 "max_latency_us": 15490.327272727272 00:15:21.828 } 00:15:21.828 ], 00:15:21.828 "core_count": 1 00:15:21.828 } 00:15:21.828 09:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:15:21.828 [2024-11-05 09:38:00.444374] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:15:21.828 [2024-11-05 09:38:00.444502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75227 ] 00:15:21.828 [2024-11-05 09:38:00.590568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:21.828 [2024-11-05 09:38:00.622816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:21.828 [2024-11-05 09:38:00.651528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:21.828 [2024-11-05 09:38:02.892363] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:15:21.828 [2024-11-05 09:38:02.892494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.828 [2024-11-05 09:38:02.892522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.828 [2024-11-05 09:38:02.892541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.828 [2024-11-05 09:38:02.892555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.828 [2024-11-05 09:38:02.892569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.828 [2024-11-05 09:38:02.892583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.828 [2024-11-05 09:38:02.892598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.828 [2024-11-05 09:38:02.892612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.828 [2024-11-05 09:38:02.892626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:15:21.828 [2024-11-05 09:38:02.892678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:15:21.828 [2024-11-05 09:38:02.892723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7de710 (9): Bad file descriptor 00:15:21.828 [2024-11-05 09:38:02.895027] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:15:21.828 Running I/O for 1 seconds... 
00:15:21.828 6678.00 IOPS, 26.09 MiB/s
00:15:21.828 Latency(us)
00:15:21.828 [2024-11-05T09:38:07.786Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:15:21.828 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:15:21.828 Verification LBA range: start 0x0 length 0x4000
00:15:21.828 NVMe0n1                                   : 1.01        6734.09  26.31  0.00    0.00  18929.69  2457.60  15490.33
00:15:21.828 [2024-11-05T09:38:07.786Z] ===================================================================================================================
00:15:21.828 [2024-11-05T09:38:07.786Z] Total      :              6734.09  26.31  0.00    0.00  18929.69  2457.60  15490.33
00:15:21.828 09:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:15:21.828 09:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:21.828 09:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:22.087 09:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:22.087 09:38:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:15:22.345 09:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:15:22.604 09:38:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:15:25.890 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:15:25.890 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 75227
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 75227 ']'
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 75227
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75227
00:15:26.149 killing process with pid 75227
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75227'
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 75227
00:15:26.149 09:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 75227
00:15:26.149 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
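The perform_tests JSON printed above is the machine-readable counterpart of the Latency(us) table. A hypothetical post-processing step, assuming the JSON had been saved to a file named results.json (illustrative name; the field names match the output above):

  # one summary line per job from the bdevperf results JSON
  jq -r '.results[] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, failed=\(.io_failed)"' results.json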
00:15:26.149 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20}
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:15:26.717 rmmod nvme_tcp
00:15:26.717 rmmod nvme_fabrics
00:15:26.717 rmmod nvme_keyring
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 74975 ']'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 74975
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 74975 ']'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 74975
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74975
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:15:26.717 killing process with pid 74975
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74975'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 74975
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 74975
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:15:26.717 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:15:26.976 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0
00:15:26.976
00:15:26.976 real 0m32.665s
00:15:26.976 user 2m6.405s
00:15:26.976 sys 0m5.259s
00:15:26.976 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable
00:15:26.976 ************************************
00:15:26.976 END TEST nvmf_failover
00:15:26.976 ************************************
00:15:26.976 09:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:15:27.236 09:38:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:15:27.236 09:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:15:27.236 09:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:15:27.236 09:38:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:15:27.236 ************************************
00:15:27.236 START TEST nvmf_host_discovery
00:15:27.236 ************************************
00:15:27.236 09:38:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:15:27.236 * Looking for test storage...
00:15:27.236 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:27.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.236 --rc genhtml_branch_coverage=1 00:15:27.236 --rc genhtml_function_coverage=1 00:15:27.236 --rc genhtml_legend=1 00:15:27.236 --rc geninfo_all_blocks=1 00:15:27.236 --rc geninfo_unexecuted_blocks=1 00:15:27.236 00:15:27.236 ' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:27.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.236 --rc genhtml_branch_coverage=1 00:15:27.236 --rc genhtml_function_coverage=1 00:15:27.236 --rc genhtml_legend=1 00:15:27.236 --rc geninfo_all_blocks=1 00:15:27.236 --rc geninfo_unexecuted_blocks=1 00:15:27.236 00:15:27.236 ' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:27.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.236 --rc genhtml_branch_coverage=1 00:15:27.236 --rc genhtml_function_coverage=1 00:15:27.236 --rc genhtml_legend=1 00:15:27.236 --rc geninfo_all_blocks=1 00:15:27.236 --rc geninfo_unexecuted_blocks=1 00:15:27.236 00:15:27.236 ' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:27.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:27.236 --rc genhtml_branch_coverage=1 00:15:27.236 --rc genhtml_function_coverage=1 00:15:27.236 --rc genhtml_legend=1 00:15:27.236 --rc geninfo_all_blocks=1 00:15:27.236 --rc geninfo_unexecuted_blocks=1 00:15:27.236 00:15:27.236 ' 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:27.236 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:27.237 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:27.237 Cannot find device "nvmf_init_br" 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:27.237 Cannot find device "nvmf_init_br2" 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:15:27.237 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:27.497 Cannot find device "nvmf_tgt_br" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:27.497 Cannot find device "nvmf_tgt_br2" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:27.497 Cannot find device "nvmf_init_br" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:27.497 Cannot find device "nvmf_init_br2" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:27.497 Cannot find device "nvmf_tgt_br" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:27.497 Cannot find device "nvmf_tgt_br2" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:27.497 Cannot find device "nvmf_br" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:27.497 Cannot find device "nvmf_init_if" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:27.497 Cannot find device "nvmf_init_if2" 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:27.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:27.497 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:27.497 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:27.757 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:27.757 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.065 ms 00:15:27.757 00:15:27.757 --- 10.0.0.3 ping statistics --- 00:15:27.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.757 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:27.757 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:27.757 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:15:27.757 00:15:27.757 --- 10.0.0.4 ping statistics --- 00:15:27.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.757 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:27.757 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:27.757 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:15:27.757 00:15:27.757 --- 10.0.0.1 ping statistics --- 00:15:27.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.757 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:27.757 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:27.757 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:15:27.757 00:15:27.757 --- 10.0.0.2 ping statistics --- 00:15:27.757 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:27.757 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=75626 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 75626 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75626 ']' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:27.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:27.757 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:27.757 [2024-11-05 09:38:13.653482] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
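Before the discovery test proper begins, nvmftestinit has just rebuilt the virtual test network seen above: a network namespace for the target, veth pairs for initiator and target, a bridge joining them, iptables ACCEPT rules, and ping checks in both directions. A condensed sketch of one initiator/target pair from that sequence, with names and addresses as in the trace (the real helper also sets up the second pair, 10.0.0.2/10.0.0.4, and the iptables rules shown above):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk          # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                 # bridge joins the two veth peers
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3                                      # initiator -> target
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1       # target -> initiator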
00:15:27.757 [2024-11-05 09:38:13.654160] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.016 [2024-11-05 09:38:13.807401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.016 [2024-11-05 09:38:13.842210] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.016 [2024-11-05 09:38:13.842279] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.016 [2024-11-05 09:38:13.842290] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.016 [2024-11-05 09:38:13.842313] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.016 [2024-11-05 09:38:13.842320] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:28.016 [2024-11-05 09:38:13.842631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.016 [2024-11-05 09:38:13.875341] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.016 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.016 [2024-11-05 09:38:13.972679] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.276 [2024-11-05 09:38:13.980876] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.276 09:38:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.276 null0 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.276 09:38:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.276 null1 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=75645 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 75645 /tmp/host.sock 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 75645 ']' 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.276 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.276 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.276 [2024-11-05 09:38:14.103076] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
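
[annotation] At this point the target side is fully provisioned and a second nvmf_tgt instance has been started to act as the host. A sketch of the RPC sequence traced so far, assuming rpc.py in place of the suite's rpc_cmd wrapper (target RPCs go to the default socket, host RPCs to /tmp/host.sock):

    # target side: TCP transport with the options traced above (-o -u 8192),
    # a discovery listener on 10.0.0.3:8009, and two 1000 MiB null bdevs
    # (512-byte blocks) that will later be exposed as namespaces
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
        -t tcp -a 10.0.0.3 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py bdev_null_create null1 1000 512
    rpc.py bdev_wait_for_examine
    # host side: a separate app on core 0 (-m 0x1) with its own RPC socket,
    # so host and target RPC traffic never mix
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock &
    hostpid=$!
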
00:15:28.276 [2024-11-05 09:38:14.103198] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75645 ] 00:15:28.535 [2024-11-05 09:38:14.265653] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.535 [2024-11-05 09:38:14.305378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.535 [2024-11-05 09:38:14.339916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.535 09:38:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.535 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.793 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:15:28.793 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:15:28.793 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.794 09:38:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:28.794 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.053 [2024-11-05 09:38:14.765134] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:29.053 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:29.054 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.054 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.054 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:29.054 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:29.054 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:29.054 09:38:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.054 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:15:29.054 09:38:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:15:29.648 [2024-11-05 09:38:15.415935] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:29.648 [2024-11-05 09:38:15.415983] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:29.648 [2024-11-05 09:38:15.416011] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:29.648 [2024-11-05 09:38:15.421980] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:15:29.648 [2024-11-05 09:38:15.476382] 
bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:15:29.648 [2024-11-05 09:38:15.477382] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x112de50:1 started. 00:15:29.648 [2024-11-05 09:38:15.479155] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:29.648 [2024-11-05 09:38:15.479187] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:29.648 [2024-11-05 09:38:15.484439] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x112de50 was disconnected and freed. delete nvme_qpair. 00:15:30.216 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.216 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:30.216 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:30.216 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.217 09:38:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.217 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 
00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.476 [2024-11-05 09:38:16.238140] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x113bf80:1 started. 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.476 [2024-11-05 09:38:16.244719] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x113bf80 was disconnected and freed. delete nvme_qpair. 
00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:15:30.476 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.477 [2024-11-05 09:38:16.355270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:15:30.477 [2024-11-05 09:38:16.356331] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:30.477 [2024-11-05 09:38:16.356369] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:30.477 [2024-11-05 09:38:16.362333] bdev_nvme.c:7306:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:30.477 [2024-11-05 09:38:16.420883] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:15:30.477 [2024-11-05 09:38:16.420957] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:30.477 [2024-11-05 09:38:16.420970] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:15:30.477 [2024-11-05 09:38:16.420976] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.477 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_get_controllers -n nvme0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.736 [2024-11-05 09:38:16.588638] bdev_nvme.c:7364:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:15:30.736 [2024-11-05 09:38:16.588675] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:15:30.736 [2024-11-05 09:38:16.594630] bdev_nvme.c:7169:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:15:30.736 [2024-11-05 09:38:16.594662] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:30.736 [2024-11-05 09:38:16.594768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.736 [2024-11-05 09:38:16.594803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.736 [2024-11-05 09:38:16.594818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.736 [2024-11-05 09:38:16.594828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.736 [2024-11-05 09:38:16.594849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.736 [2024-11-05 09:38:16.594860] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.736 [2024-11-05 09:38:16.594871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:15:30.736 [2024-11-05 09:38:16.594880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:30.736 [2024-11-05 09:38:16.594890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x110a230 is same with the state(6) to be set 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.736 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.737 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.737 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:30.737 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:30.737 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # return 0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.995 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:15:30.996 09:38:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:30.996 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.254 09:38:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.189 [2024-11-05 09:38:17.996599] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:15:32.189 [2024-11-05 09:38:17.996629] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:15:32.189 [2024-11-05 09:38:17.996665] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:15:32.189 [2024-11-05 09:38:18.002641] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:15:32.189 [2024-11-05 09:38:18.061009] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:15:32.189 [2024-11-05 09:38:18.061741] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1102eb0:1 started. 00:15:32.189 [2024-11-05 09:38:18.063809] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:15:32.189 [2024-11-05 09:38:18.063865] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.189 [2024-11-05 09:38:18.065613] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1102eb0 was disconnected and freed. delete nvme_qpair. 
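[editor's note] For readers skimming the trace: the waitforcondition helper exercised throughout this test is a plain bash retry loop around eval. A minimal sketch reconstructed from the xtrace lines above — the rpc_cmd wrapper and the sleep between attempts are assumptions, and the real helper in autotest_common.sh may differ:

    # rpc_cmd stands in for the framework helper of the same name;
    # modelled here as a thin wrapper around SPDK's rpc.py.
    rpc_cmd() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py "$@"; }

    # Retry an eval'd condition up to 10 times, matching what the trace
    # shows (local max=10; (( max-- )); eval "$cond"). The sleep between
    # attempts is assumed, not visible in the log.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 0.2
        done
        return 1
    }

    # Example with the exact RPCs and socket from the log: stop
    # discovery, then wait until no controllers remain on the host side.
    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    waitforcondition '[[ "$(rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r ".[].name")" == "" ]]'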
00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.189 request: 00:15:32.189 { 00:15:32.189 "name": "nvme", 00:15:32.189 "trtype": "tcp", 00:15:32.189 "traddr": "10.0.0.3", 00:15:32.189 "adrfam": "ipv4", 00:15:32.189 "trsvcid": "8009", 00:15:32.189 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:32.189 "wait_for_attach": true, 00:15:32.189 "method": "bdev_nvme_start_discovery", 00:15:32.189 "req_id": 1 00:15:32.189 } 00:15:32.189 Got JSON-RPC error response 00:15:32.189 response: 00:15:32.189 { 00:15:32.189 "code": -17, 00:15:32.189 "message": "File exists" 00:15:32.189 } 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.189 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.190 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 request: 00:15:32.449 { 00:15:32.449 "name": "nvme_second", 00:15:32.449 "trtype": "tcp", 00:15:32.449 "traddr": "10.0.0.3", 00:15:32.449 "adrfam": "ipv4", 00:15:32.449 "trsvcid": "8009", 00:15:32.449 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:32.449 "wait_for_attach": true, 00:15:32.449 "method": "bdev_nvme_start_discovery", 00:15:32.449 "req_id": 1 00:15:32.449 } 00:15:32.449 Got JSON-RPC error response 00:15:32.449 response: 00:15:32.449 { 00:15:32.449 "code": -17, 00:15:32.449 "message": "File exists" 00:15:32.449 } 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # 
[[ -n '' ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:32.449 09:38:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:33.385 [2024-11-05 09:38:19.336204] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:33.385 [2024-11-05 09:38:19.336273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112ee40 with addr=10.0.0.3, port=8010 00:15:33.385 [2024-11-05 09:38:19.336296] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:33.385 [2024-11-05 09:38:19.336306] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:33.385 [2024-11-05 09:38:19.336316] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:34.761 [2024-11-05 09:38:20.336217] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:15:34.761 [2024-11-05 09:38:20.336276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112ee40 with addr=10.0.0.3, port=8010 00:15:34.761 [2024-11-05 09:38:20.336296] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:15:34.761 [2024-11-05 09:38:20.336321] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:15:34.761 [2024-11-05 09:38:20.336329] bdev_nvme.c:7450:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:15:35.698 [2024-11-05 09:38:21.336063] bdev_nvme.c:7425:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:15:35.698 request: 00:15:35.698 { 00:15:35.698 "name": "nvme_second", 00:15:35.698 "trtype": "tcp", 00:15:35.698 "traddr": "10.0.0.3", 00:15:35.698 "adrfam": "ipv4", 00:15:35.698 "trsvcid": "8010", 00:15:35.698 "hostnqn": "nqn.2021-12.io.spdk:test", 00:15:35.698 "wait_for_attach": false, 00:15:35.698 "attach_timeout_ms": 3000, 00:15:35.698 "method": "bdev_nvme_start_discovery", 00:15:35.698 "req_id": 1 00:15:35.698 } 00:15:35.698 Got JSON-RPC error response 00:15:35.698 response: 00:15:35.698 { 00:15:35.698 "code": -110, 00:15:35.698 "message": "Connection timed out" 00:15:35.698 } 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:15:35.698 09:38:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 75645 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:35.698 rmmod nvme_tcp 00:15:35.698 rmmod nvme_fabrics 00:15:35.698 rmmod nvme_keyring 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 75626 ']' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 75626 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 75626 ']' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 75626 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75626 00:15:35.698 killing process with pid 75626 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75626' 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 75626 00:15:35.698 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 75626 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@791 -- # iptables-restore 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:35.958 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:15:36.215 00:15:36.215 real 0m9.023s 00:15:36.215 user 0m17.034s 00:15:36.215 sys 0m1.980s 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:36.215 09:38:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:15:36.215 ************************************ 00:15:36.215 END TEST nvmf_host_discovery 00:15:36.215 ************************************ 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:15:36.215 ************************************ 
00:15:36.215 START TEST nvmf_host_multipath_status 00:15:36.215 ************************************ 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:15:36.215 * Looking for test storage... 00:15:36.215 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:15:36.215 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:15:36.474 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:36.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.475 --rc genhtml_branch_coverage=1 00:15:36.475 --rc genhtml_function_coverage=1 00:15:36.475 --rc genhtml_legend=1 00:15:36.475 --rc geninfo_all_blocks=1 00:15:36.475 --rc geninfo_unexecuted_blocks=1 00:15:36.475 00:15:36.475 ' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:36.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.475 --rc genhtml_branch_coverage=1 00:15:36.475 --rc genhtml_function_coverage=1 00:15:36.475 --rc genhtml_legend=1 00:15:36.475 --rc geninfo_all_blocks=1 00:15:36.475 --rc geninfo_unexecuted_blocks=1 00:15:36.475 00:15:36.475 ' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:36.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.475 --rc genhtml_branch_coverage=1 00:15:36.475 --rc genhtml_function_coverage=1 00:15:36.475 --rc genhtml_legend=1 00:15:36.475 --rc geninfo_all_blocks=1 00:15:36.475 --rc geninfo_unexecuted_blocks=1 00:15:36.475 00:15:36.475 ' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:36.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.475 --rc genhtml_branch_coverage=1 00:15:36.475 --rc genhtml_function_coverage=1 00:15:36.475 --rc genhtml_legend=1 00:15:36.475 --rc geninfo_all_blocks=1 00:15:36.475 --rc geninfo_unexecuted_blocks=1 00:15:36.475 00:15:36.475 ' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:36.475 09:38:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:36.475 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:15:36.475 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:36.476 Cannot find device "nvmf_init_br" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:36.476 Cannot find device "nvmf_init_br2" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:36.476 Cannot find device "nvmf_tgt_br" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:36.476 Cannot find device "nvmf_tgt_br2" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:36.476 Cannot find device "nvmf_init_br" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:36.476 Cannot find device "nvmf_init_br2" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:36.476 Cannot find device "nvmf_tgt_br" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:36.476 Cannot find device "nvmf_tgt_br2" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:36.476 Cannot find device "nvmf_br" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:15:36.476 Cannot find device "nvmf_init_if" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:36.476 Cannot find device "nvmf_init_if2" 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:36.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:36.476 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:36.476 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:36.735 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:36.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:36.736 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:15:36.736 00:15:36.736 --- 10.0.0.3 ping statistics --- 00:15:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.736 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:36.736 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:36.736 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:15:36.736 00:15:36.736 --- 10.0.0.4 ping statistics --- 00:15:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.736 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:36.736 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:36.736 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:15:36.736 00:15:36.736 --- 10.0.0.1 ping statistics --- 00:15:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.736 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:36.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:36.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:15:36.736 00:15:36.736 --- 10.0.0.2 ping statistics --- 00:15:36.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:36.736 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=76143 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 76143 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76143 ']' 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
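[editor's note] The netns/veth plumbing that nvmf_veth_init walked through above condenses to a short recipe. A sketch of the first initiator/target pair only — the real helper also wires nvmf_init_if2/nvmf_tgt_if2, the 10.0.0.2 and 10.0.0.4 addresses, and the iptables ACCEPT rules seen in the trace:

    # The target side lives in its own namespace; each side gets a veth
    # pair whose peer is enslaved to a common bridge.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addresses as used by the test: initiator 10.0.0.1, target 10.0.0.3.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    # Bring everything up and bridge the two in-root peers together.
    ip link add nvmf_br type bridge
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Sanity check, mirroring the pings recorded above.
    ping -c 1 10.0.0.3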
00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.736 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:36.995 [2024-11-05 09:38:22.708718] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:15:36.995 [2024-11-05 09:38:22.709296] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.995 [2024-11-05 09:38:22.856822] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:36.995 [2024-11-05 09:38:22.895116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.995 [2024-11-05 09:38:22.895196] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.995 [2024-11-05 09:38:22.895221] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.995 [2024-11-05 09:38:22.895231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.995 [2024-11-05 09:38:22.895240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.995 [2024-11-05 09:38:22.896189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.995 [2024-11-05 09:38:22.896205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.995 [2024-11-05 09:38:22.928950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.254 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.254 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:15:37.254 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:37.254 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:37.254 09:38:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:37.254 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:37.254 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=76143 00:15:37.254 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:37.513 [2024-11-05 09:38:23.318350] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:37.513 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:37.772 Malloc0 00:15:37.772 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:15:38.032 09:38:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0
00:15:38.291 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:15:38.550 [2024-11-05 09:38:24.467116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:15:38.550 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
00:15:38.809 [2024-11-05 09:38:24.723269] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 ***
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=76187
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 76187 /var/tmp/bdevperf.sock
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 76187 ']'
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100
00:15:38.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
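
At this point the target is fully provisioned and the host side is coming up: a TCP transport, a 64 MiB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled (-r) and at most two namespaces (-m 2), the namespace itself, and one listener per port so the host sees two paths to the same namespace. bdevperf then starts on core 2 (-m 0x4) in wait-for-RPC mode (-z) with a queue depth of 128 and a 4 KiB verify workload. The target-side steps, condensed from the trace into one runnable sequence (flags and addresses copied verbatim, run against the target's default RPC socket inside the same namespace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0                         # 64 MiB, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0     # namespace backed by Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
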
00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:38.809 09:38:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:15:39.387 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:39.387 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:15:39.387 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:15:39.387 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:39.967 Nvme0n1 00:15:39.967 09:38:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:15:40.226 Nvme0n1 00:15:40.226 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:15:40.226 09:38:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:15:42.132 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:15:42.132 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:42.391 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:42.959 09:38:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:15:43.894 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:15:43.894 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:43.894 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:43.894 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.152 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.152 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:44.152 09:38:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.152 09:38:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:44.410 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:44.410 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:44.410 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:44.410 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:44.669 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:44.669 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:44.669 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:44.669 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.237 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.237 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:45.237 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.237 09:38:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:45.237 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.237 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:45.237 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:45.237 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:45.496 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:45.496 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:15:45.496 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:46.062 09:38:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:46.062 09:38:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.438 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:47.697 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.697 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:47.697 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.697 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:47.955 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:47.955 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:47.955 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:47.955 09:38:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:48.522 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.522 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:48.522 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:48.522 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:48.793 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:48.793 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:48.793 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:48.793 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:49.080 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:49.080 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:15:49.080 09:38:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:49.338 09:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:15:49.597 09:38:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:15:50.533 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:15:50.533 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:50.533 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:50.533 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.102 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.102 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:51.102 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.102 09:38:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:51.361 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:51.361 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:51.361 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:51.361 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.620 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.620 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:15:51.620 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:51.620 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.879 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:51.879 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:51.879 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:51.879 09:38:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:52.137 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.137 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:15:52.137 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:52.137 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:52.396 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:52.396 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:15:52.396 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:52.964 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:52.964 09:38:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:15:54.342 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:15:54.342 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:15:54.342 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.342 09:38:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:54.342 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:54.342 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:54.342 09:38:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:54.342 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:54.910 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:54.910 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:54.910 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:54.910 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.172 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.172 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:55.172 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.172 09:38:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:55.432 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.432 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:15:55.432 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.432 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:15:55.690 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:55.690 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:55.690 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:55.690 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:55.949 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:55.949 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:15:55.949 09:38:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:56.208 09:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:15:56.467 09:38:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:15:57.405 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:15:57.405 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:15:57.405 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.405 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:15:57.703 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:57.703 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:15:57.703 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:57.703 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:15:58.283 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:58.283 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:15:58.283 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.283 09:38:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:15:58.283 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.283 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:15:58.283 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.283 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:15:58.574 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:15:58.574 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:15:58.574 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:58.574 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:15:58.833 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:58.833 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:15:58.833 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:15:58.833 09:38:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:15:59.400 09:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:15:59.400 09:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:15:59.400 09:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:59.659 09:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:15:59.918 09:38:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:16:00.854 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:16:00.854 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:00.854 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:00.854 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:01.112 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:01.112 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:01.112 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.112 09:38:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:01.371 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.371 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:01.371 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:01.371 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
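
Every status probe in this section is the same two-step pattern: ask bdevperf, over its own RPC socket, which NVMe I/O paths it currently sees (bdev_nvme_get_io_paths), then let jq pick one attribute (current, connected or accessible) of the path whose listener port matches. check_status simply runs the probe six times, three attributes for each of the two ports. Reconstructed from the xtrace as a standalone helper (rpc_py is assumed to point at scripts/rpc.py):

port_status() {   # usage: port_status <trsvcid> <attribute> <expected>
    local port=$1 attr=$2 expected=$3 actual
    actual=$("$rpc_py" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid == \"$port\").$attr")
    [[ $actual == "$expected" ]]   # non-zero exit code fails the test
}

For example, port_status 4421 accessible false asserts that the path through port 4421 is reported unreachable after its listener was put into the inaccessible ANA state.
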
00:16:01.630 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.630 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:01.630 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:01.630 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:01.888 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:01.888 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:16:01.888 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:01.888 09:38:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.148 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:02.148 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:02.148 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:02.148 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:02.407 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:02.407 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:16:02.666 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:16:02.666 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:16:02.925 09:38:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:03.492 09:38:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:16:04.428 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:16:04.428 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:04.428 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
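
The bdev_nvme_set_multipath_policy call above switches Nvme0n1 from the default active_passive policy to active_active, which changes what current means: before the switch exactly one of the two paths was current at a time, while under active_active every optimized path carries I/O, so with both listeners optimized the very next check expects current=true on 4420 and 4421 at once. Each transition in this file is driven by the same two-line helper, reconstructed here from the sh@59/sh@60 trace lines:

set_ANA_state() {   # $1 = ANA state for the 4420 listener, $2 = for 4421
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$1"
    "$rpc_py" nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$2"
}

The sleep 1 between setting the states and re-checking gives the host time to pick up the ANA log page change before the paths are probed again.
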
00:16:04.428 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:04.686 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.686 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:04.686 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.686 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:04.944 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:04.944 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:04.944 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:04.944 09:38:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:05.203 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.203 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:05.203 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.203 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:05.770 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:05.770 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:05.770 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:05.770 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:06.028 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.028 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:06.028 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:06.028 09:38:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:06.288 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:06.288 
09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:16:06.288 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:06.547 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:16:06.805 09:38:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:16:07.763 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:16:07.763 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:16:07.763 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:07.763 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.022 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:08.022 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:08.022 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.022 09:38:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:08.281 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.281 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:08.281 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.281 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:08.848 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:08.848 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:08.848 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:08.848 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:09.108 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.108 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:09.108 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.108 09:38:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:09.367 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.367 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:09.367 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:09.367 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:09.626 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:09.626 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:16:09.626 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:09.885 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:16:10.144 09:38:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:16:11.083 09:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:16:11.083 09:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:11.083 09:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.084 09:38:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:11.343 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.343 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:16:11.343 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:11.343 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:11.912 09:38:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:12.171 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.172 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:12.172 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.172 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:12.430 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.430 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:16:12.431 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:12.431 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:12.998 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:12.998 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:16:12.998 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:16:13.259 09:38:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:16:13.519 09:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:16:14.471 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:16:14.471 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:16:14.471 09:39:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.471 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:16:14.731 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:14.731 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:16:14.731 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.731 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:16:14.990 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:16:14.990 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:16:14.990 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:14.990 09:39:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:16:15.249 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.249 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:16:15.249 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:16:15.249 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.507 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.507 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:16:15.507 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:16:15.507 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:16:15.766 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:16:15.766 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:16:15.766 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:16:15.766 09:39:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 76187
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76187 ']'
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76187
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76187
00:16:16.336 killing process with pid 76187
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76187'
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76187
00:16:16.336 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76187
00:16:16.336 {
00:16:16.336   "results": [
00:16:16.336     {
00:16:16.336       "job": "Nvme0n1",
00:16:16.336       "core_mask": "0x4",
00:16:16.336       "workload": "verify",
00:16:16.336       "status": "terminated",
00:16:16.336       "verify_range": {
00:16:16.336         "start": 0,
00:16:16.336         "length": 16384
00:16:16.336       },
00:16:16.336       "queue_depth": 128,
00:16:16.336       "io_size": 4096,
00:16:16.336       "runtime": 35.916647,
00:16:16.336       "iops": 8274.296874092952,
00:16:16.336       "mibps": 32.32147216442559,
00:16:16.336       "io_failed": 0,
00:16:16.336       "io_timeout": 0,
00:16:16.336       "avg_latency_us": 15436.281544859568,
00:16:16.336       "min_latency_us": 837.8181818181819,
00:16:16.336       "max_latency_us": 4026531.84
00:16:16.337     }
00:16:16.337   ],
00:16:16.337   "core_count": 1
00:16:16.337 }
00:16:16.337 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 76187
00:16:16.337 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:16:16.337 [2024-11-05 09:38:24.790371] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:16:16.337 [2024-11-05 09:38:24.790466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76187 ]
00:16:16.337 [2024-11-05 09:38:24.937364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:16.337 [2024-11-05 09:38:24.979165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:16.337 [2024-11-05 09:38:25.013768] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:16:16.337 Running I/O for 90 seconds...
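
The JSON summary above closes out the I/O job: bdevperf was started with -t 90 but ran for only about 35.9 seconds because killprocess terminated it once every ANA combination had been verified, and none of the roughly 297 thousand I/Os failed. The throughput fields are internally consistent, which gives a quick sanity check on such a summary:

# mibps should equal iops * io_size / 2^20, and iops * runtime gives the I/O count
awk 'BEGIN { printf "%.2f MiB/s, %.0f I/Os\n", 8274.296874092952 * 4096 / 1048576, 8274.296874092952 * 35.916647 }'
# -> 32.32 MiB/s and ~297185 I/Os, consistent with the "mibps" value reported above

Everything after the cat command is the replayed bdevperf-side log (try.txt): per-second throughput samples first, then nvme_qpair notices for commands that completed with ASYMMETRIC ACCESS INACCESSIBLE status while a path was being transitioned.
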
00:16:16.337 Per-second samples 1-15: 6804.00, 6858.50, 6876.33, 6885.25, 6865.00, 6926.83, 7157.86, 7388.00, 7559.11, 7694.30, 7813.27, 7912.83, 7998.31, 8037.29, 8107.80 IOPS (26.58-31.67 MiB/s)
00:16:16.337-00:16:16.341 [2024-11-05 09:38:42] [... ~128 repeated nvme_qpair.c notice pairs omitted: each nvme_io_qpair_print_command READ/WRITE submission (sqid:1, nsid:1, len:8, lba 123336-124352) is followed by an spdk_nvme_print_completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) during the first path-failure window ...]
00:16:16.341 Per-second samples 16-33: 8131.56, 7653.24, 7228.06, 6847.63, 6545.20, 6680.00, 6796.36, 6930.09, 7149.38, 7352.48, 7516.46, 7579.00, 7636.57, 7697.97, 7770.90, 7925.90, 8060.50, 8186.24 IOPS (the dip to 6545.20 IOPS and subsequent recovery span the path transition)
00:16:16.341-00:16:16.343 [2024-11-05 09:38:59] [... ~50 further notice pairs omitted (sqid:1, nsid:1, len:8, lba 70432-71424), again completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) during the second path-failure window ...]
00:16:16.343 Per-second samples 34-35: 8226.15, 8253.29 IOPS (32.13, 32.24 MiB/s)
00:16:16.343 Received shutdown signal, test time was about 35.917586 seconds
00:16:16.343
00:16:16.343 Latency(us)
00:16:16.343 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:16.343 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:16:16.343 Verification LBA range: start 0x0 length 0x4000
00:16:16.343 Nvme0n1 : 35.92 8274.30 32.32 0.00 0.00 15436.28 837.82 4026531.84
00:16:16.343 ===================================================================================================================
00:16:16.343 Total : 8274.30 32.32 0.00 0.00 15436.28 837.82 4026531.84
00:16:16.343 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:16:16.602 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:16.603 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:16.603 rmmod nvme_tcp 00:16:16.603 rmmod nvme_fabrics 00:16:16.866 rmmod nvme_keyring 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 76143 ']' 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 76143 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 76143 ']' 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 76143 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76143 00:16:16.866 killing process with pid 76143 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76143' 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 76143 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 76143 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 
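The nvmftestfini teardown above unloads the nvme-tcp/nvme-fabrics/nvme-keyring modules and then strips only the firewall rules the test installed. The iptables idiom it runs (common.sh's iptr helper, nvmf/common.sh@791 in the trace) round-trips the ruleset through a filter on the SPDK_NVMF comment tag:

    # Remove only rules carrying the SPDK_NVMF comment; unrelated rules
    # survive the save/filter/restore round-trip.
    iptables-save | grep -v SPDK_NVMF | iptables-restore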
00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:16.866 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.152 09:39:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:16:17.152 ************************************ 00:16:17.152 END TEST nvmf_host_multipath_status 00:16:17.152 ************************************ 00:16:17.152 00:16:17.152 real 0m41.031s 00:16:17.152 user 2m13.517s 00:16:17.152 sys 0m12.158s 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:17.152 09:39:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:17.425 
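With the multipath_status suite finished (41 s wall clock, per the timing block above), the harness dispatches the next script through run_test, which prints the START/END banners and the real/user/sys summary seen in this log. run_test's actual definition lives in autotest_common.sh and is not shown here; a hypothetical sketch of the pattern it implements:

    # Hypothetical run_test-style dispatcher (not SPDK's exact code):
    # banner the suite, time it, and propagate its exit status.
    run_test_sketch() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        return "$rc"
    }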
************************************ 00:16:17.425 START TEST nvmf_discovery_remove_ifc 00:16:17.425 ************************************ 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:17.425 * Looking for test storage... 00:16:17.425 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.425 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:17.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.426 --rc genhtml_branch_coverage=1 00:16:17.426 --rc genhtml_function_coverage=1 00:16:17.426 --rc genhtml_legend=1 00:16:17.426 --rc geninfo_all_blocks=1 00:16:17.426 --rc geninfo_unexecuted_blocks=1 00:16:17.426 00:16:17.426 ' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:17.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.426 --rc genhtml_branch_coverage=1 00:16:17.426 --rc genhtml_function_coverage=1 00:16:17.426 --rc genhtml_legend=1 00:16:17.426 --rc geninfo_all_blocks=1 00:16:17.426 --rc geninfo_unexecuted_blocks=1 00:16:17.426 00:16:17.426 ' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:17.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.426 --rc genhtml_branch_coverage=1 00:16:17.426 --rc genhtml_function_coverage=1 00:16:17.426 --rc genhtml_legend=1 00:16:17.426 --rc geninfo_all_blocks=1 00:16:17.426 --rc geninfo_unexecuted_blocks=1 00:16:17.426 00:16:17.426 ' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:17.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.426 --rc genhtml_branch_coverage=1 00:16:17.426 --rc genhtml_function_coverage=1 00:16:17.426 --rc genhtml_legend=1 00:16:17.426 --rc geninfo_all_blocks=1 00:16:17.426 --rc geninfo_unexecuted_blocks=1 00:16:17.426 00:16:17.426 ' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:17.426 09:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.426 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:17.427 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:17.427 09:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:17.427 Cannot find device "nvmf_init_br" 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:17.427 Cannot find device "nvmf_init_br2" 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:17.427 Cannot find device "nvmf_tgt_br" 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:16:17.427 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:17.686 Cannot find device "nvmf_tgt_br2" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:17.686 Cannot find device "nvmf_init_br" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:17.686 Cannot find device "nvmf_init_br2" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:17.686 Cannot find device "nvmf_tgt_br" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:17.686 Cannot find device "nvmf_tgt_br2" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:17.686 Cannot find device "nvmf_br" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:17.686 Cannot find device "nvmf_init_if" 00:16:17.686 09:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:17.686 Cannot find device "nvmf_init_if2" 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:17.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:17.686 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.686 09:39:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.686 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:17.687 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:17.687 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:17.687 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:17.946 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.946 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.094 ms 00:16:17.946 00:16:17.946 --- 10.0.0.3 ping statistics --- 00:16:17.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.946 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:17.946 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:17.946 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:16:17.946 00:16:17.946 --- 10.0.0.4 ping statistics --- 00:16:17.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.946 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:16:17.946 00:16:17.946 --- 10.0.0.1 ping statistics --- 00:16:17.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.946 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:17.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:16:17.946 00:16:17.946 --- 10.0.0.2 ping statistics --- 00:16:17.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.946 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=77054 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 77054 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77054 ']' 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
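The four pings above are the final check on the virtual topology that common.sh builds for NET_TYPE=virt: initiator addresses 10.0.0.1/10.0.0.2 stay in the root namespace, target addresses 10.0.0.3/10.0.0.4 live inside the nvmf_tgt_ns_spdk namespace, and the veth halves are joined through the nvmf_br bridge. A condensed sketch of that setup, reduced to one initiator/target pair (device names and addresses are the ones from the trace; the second pair, nvmf_init_if2/nvmf_tgt_if2, is configured identically):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk           # target end moves into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                  # bridge the two halves together
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                       # initiator -> target, as verified above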
00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:17.946 09:39:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:17.946 [2024-11-05 09:39:03.798496] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:16:17.946 [2024-11-05 09:39:03.798591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:18.205 [2024-11-05 09:39:03.948989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.205 [2024-11-05 09:39:03.981268] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:18.205 [2024-11-05 09:39:03.981320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:18.205 [2024-11-05 09:39:03.981331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:18.205 [2024-11-05 09:39:03.981339] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:18.205 [2024-11-05 09:39:03.981346] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:18.205 [2024-11-05 09:39:03.981679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.205 [2024-11-05 09:39:04.011936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.205 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.205 [2024-11-05 09:39:04.115182] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.205 [2024-11-05 09:39:04.123335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:16:18.205 null0 00:16:18.205 [2024-11-05 09:39:04.155288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=77073 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 77073 /tmp/host.sock 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 77073 ']' 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:16:18.464 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:18.464 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.464 [2024-11-05 09:39:04.236699] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:16:18.464 [2024-11-05 09:39:04.236803] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77073 ] 00:16:18.464 [2024-11-05 09:39:04.391519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.723 [2024-11-05 09:39:04.431614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:18.723 [2024-11-05 09:39:04.566285] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.723 09:39:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.661 [2024-11-05 09:39:05.608388] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:19.661 [2024-11-05 09:39:05.608443] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:19.661 [2024-11-05 09:39:05.608466] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:19.661 [2024-11-05 09:39:05.614472] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:16:19.920 [2024-11-05 09:39:05.668918] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:16:19.920 [2024-11-05 09:39:05.670104] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x7eafb0:1 started. 00:16:19.920 [2024-11-05 09:39:05.671818] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:19.920 [2024-11-05 09:39:05.671917] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:19.920 [2024-11-05 09:39:05.671943] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:19.920 [2024-11-05 09:39:05.671961] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:16:19.920 [2024-11-05 09:39:05.671989] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.920 [2024-11-05 09:39:05.677137] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x7eafb0 was disconnected and freed. delete nvme_qpair. 
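At this point the discovery controller has attached, nvme0 is connected to 10.0.0.3:4420, and bdev nvme0n1 exists on the host. The attach was driven by the bdev_nvme_start_discovery RPC split across the trace above, reproduced whole here as a direct scripts/rpc.py call (the trace's rpc_cmd helper wraps the same invocation); its short timeouts are what let the interface-removal scenario below fail over and give up quickly:

    # Discovery against the target's discovery service on port 8009, with the
    # aggressive reconnect/failure timeouts this test uses (from the trace).
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /tmp/host.sock \
        bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
        -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach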
00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:19.920 09:39:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:20.855 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:21.113 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.113 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:21.114 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.114 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:21.114 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:21.114 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:21.114 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.114 09:39:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:21.114 09:39:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:22.050 09:39:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:22.986 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:23.244 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.244 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:23.244 09:39:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:24.179 09:39:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:24.179 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.179 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:24.179 09:39:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:25.115 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.374 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:25.374 09:39:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:25.374 [2024-11-05 09:39:11.099551] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:25.374 [2024-11-05 09:39:11.099613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.374 [2024-11-05 09:39:11.099629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.374 [2024-11-05 09:39:11.099642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.374 [2024-11-05 09:39:11.099651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.374 [2024-11-05 09:39:11.099661] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.374 [2024-11-05 09:39:11.099670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.374 [2024-11-05 09:39:11.099680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.374 [2024-11-05 09:39:11.099705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.374 [2024-11-05 09:39:11.099715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:25.374 [2024-11-05 09:39:11.099723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:25.374 [2024-11-05 09:39:11.099732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c7240 is same with the state(6) to be set 00:16:25.374 [2024-11-05 09:39:11.109546] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c7240 (9): Bad file descriptor 00:16:25.374 [2024-11-05 09:39:11.119563] bdev_nvme.c:2543:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:16:25.374 [2024-11-05 09:39:11.119620] bdev_nvme.c:2531:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:16:25.374 [2024-11-05 09:39:11.119646] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:25.374 [2024-11-05 09:39:11.119652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:25.374 [2024-11-05 09:39:11.119687] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:26.312 [2024-11-05 09:39:12.141960] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:26.312 [2024-11-05 09:39:12.142014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7c7240 with addr=10.0.0.3, port=4420 00:16:26.312 [2024-11-05 09:39:12.142032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7c7240 is same with the state(6) to be set 00:16:26.312 [2024-11-05 09:39:12.142073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7c7240 (9): Bad file descriptor 00:16:26.312 [2024-11-05 09:39:12.142565] bdev_nvme.c:3166:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:16:26.312 [2024-11-05 09:39:12.142620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:26.312 [2024-11-05 09:39:12.142641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:26.312 [2024-11-05 09:39:12.142660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:26.312 [2024-11-05 09:39:12.142679] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:26.312 [2024-11-05 09:39:12.142692] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:26.312 [2024-11-05 09:39:12.142701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
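The one-second polling loop that dominates this stretch of the log is driven by two small helpers in host/discovery_remove_ifc.sh, whose pieces are echoed individually in the xtrace above (rpc_cmd against /tmp/host.sock, jq, sort, xargs, a string compare, sleep 1). A minimal reconstruction of those helpers, assuming the names shown in the trace rather than the exact upstream source:

    get_bdev_list() {
        # Ask the host app over its RPC socket for all bdev names and
        # normalize them to one sorted, space-separated line.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the bdev list equals the expected value:
        # "" while waiting for nvme0n1 to vanish, "nvme1n1" after re-attach.
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

With the target address deleted and nvmf_tgt_if down, nvme0n1 stays listed until the dead TCP connection times out (the errno 110 above), which is why several identical poll iterations precede the delete/reconnect cycle.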
00:16:26.312 [2024-11-05 09:39:12.142718] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:16:26.312 [2024-11-05 09:39:12.142728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:26.312 09:39:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:27.296 [2024-11-05 09:39:13.142772] bdev_nvme.c:2515:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:16:27.296 [2024-11-05 09:39:13.142816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:16:27.296 [2024-11-05 09:39:13.142848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:16:27.296 [2024-11-05 09:39:13.142860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:16:27.296 [2024-11-05 09:39:13.142872] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:16:27.296 [2024-11-05 09:39:13.142882] bdev_nvme.c:2505:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:16:27.296 [2024-11-05 09:39:13.142888] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:16:27.296 [2024-11-05 09:39:13.142894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
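Each failed cycle here follows the same shape: uring's connect() to 10.0.0.3:4420 returns errno 110 (ETIMEDOUT), the reconnect poll marks the attempt failed, pending resets are cleared, and a new disconnect/reconnect round starts about a second later. That cadence is governed by bdev_nvme's reconnect options; this log does not show how they were configured, but adjusting them would look roughly like the following (option names assumed from rpc.py, not taken from this trace):

    # Retry a lost controller every second and never give up on it
    # (-1 disables the controller-loss timeout).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options \
        --reconnect-delay-sec 1 \
        --ctrlr-loss-timeout-sec -1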
00:16:27.296 [2024-11-05 09:39:13.142926] bdev_nvme.c:7133:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:16:27.296 [2024-11-05 09:39:13.142967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.296 [2024-11-05 09:39:13.142982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.296 [2024-11-05 09:39:13.142995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.296 [2024-11-05 09:39:13.143005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.296 [2024-11-05 09:39:13.143015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.296 [2024-11-05 09:39:13.143024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.296 [2024-11-05 09:39:13.143034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.296 [2024-11-05 09:39:13.143043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.296 [2024-11-05 09:39:13.143054] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:27.296 [2024-11-05 09:39:13.143063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:27.296 [2024-11-05 09:39:13.143073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 
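At this point the discovery poller gives up on the unreachable path: the entry for nqn.2016-06.io.spdk:cnode0 at 10.0.0.3:4420 is removed, and the discovery controller itself (nqn.2014-08.org.nvmexpress.discovery) lands in failed state, aborting its outstanding ASYNC EVENT REQUEST and KEEP ALIVE commands with SQ DELETION statuses. The Discovery[10.0.0.3:8009] session being torn down was started through bdev_nvme_start_discovery; the log only confirms the 10.0.0.3:8009 endpoint and the nvme bdev prefix, so the flag spelling below is an assumption from rpc.py:

    # Start a discovery session and auto-attach reported subsystems;
    # attached namespaces are named with the given prefix (nvme0n1, nvme1n1, ...).
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 --wait-for-attach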
00:16:27.296 [2024-11-05 09:39:13.143256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x752a20 (9): Bad file descriptor 00:16:27.296 [2024-11-05 09:39:13.144270] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:27.296 [2024-11-05 09:39:13.144286] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:27.296 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.555 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:27.555 09:39:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:28.491 09:39:14 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:28.491 09:39:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:29.428 [2024-11-05 09:39:15.153477] bdev_nvme.c:7382:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:16:29.428 [2024-11-05 09:39:15.153521] bdev_nvme.c:7468:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:16:29.428 [2024-11-05 09:39:15.153554] bdev_nvme.c:7345:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:16:29.428 [2024-11-05 09:39:15.159516] bdev_nvme.c:7311:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:16:29.428 [2024-11-05 09:39:15.213958] bdev_nvme.c:5632:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:16:29.428 [2024-11-05 09:39:15.214734] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x7f3290:1 started. 00:16:29.428 [2024-11-05 09:39:15.215989] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:29.428 [2024-11-05 09:39:15.216049] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:29.428 [2024-11-05 09:39:15.216085] bdev_nvme.c:8178:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:29.428 [2024-11-05 09:39:15.216112] bdev_nvme.c:7201:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:16:29.428 [2024-11-05 09:39:15.216126] bdev_nvme.c:7160:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:16:29.428 [2024-11-05 09:39:15.221988] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x7f3290 was disconnected and freed. delete nvme_qpair. 
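Once @82/@83 restore 10.0.0.3 and bring nvmf_tgt_if back up, the discovery poller re-attaches within a couple of poll cycles and the namespace reappears as nvme1n1 (a fresh controller, hence the new index, consistent with the "new subsystem nvme1" line above). The wait loop then succeeds and the test tears down: the killprocess 77073 call whose internals are echoed just below reduces to a helper of roughly this shape (the real one also special-cases a process named sudo, elided here):

    killprocess() {
        local pid=$1 process_name=
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 0   # nothing left to kill
        # On Linux, record the command name for the log line
        # (the trace shows reactor_0 and reactor_1 for the two apps).
        [[ $(uname) == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }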
00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:29.428 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 77073 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77073 ']' 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77073 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:29.687 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77073 00:16:29.688 killing process with pid 77073 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77073' 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77073 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77073 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:29.688 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:29.946 rmmod nvme_tcp 00:16:29.946 rmmod nvme_fabrics 00:16:29.946 rmmod nvme_keyring 00:16:29.946 09:39:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 77054 ']' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 77054 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 77054 ']' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 77054 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 77054 00:16:29.946 killing process with pid 77054 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 77054' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 77054 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 77054 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:29.946 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:30.205 09:39:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:16:30.205 00:16:30.205 real 0m13.012s 00:16:30.205 user 0m22.224s 00:16:30.205 sys 0m2.386s 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:30.205 09:39:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:16:30.205 ************************************ 00:16:30.205 END TEST nvmf_discovery_remove_ifc 00:16:30.205 ************************************ 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:30.465 ************************************ 00:16:30.465 START TEST nvmf_identify_kernel_target 00:16:30.465 ************************************ 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:16:30.465 * Looking for test storage... 
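The nvmftestfini sequence above unloads nvme-tcp/nvme-fabrics/nvme-keyring, restores iptables minus the SPDK_NVMF-tagged rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), and dismantles the veth/bridge topology. The identify_kernel_nvmf run starting here rebuilds the same topology from scratch; the "Cannot find device" / "Cannot open network namespace" lines further below are just the idempotent pre-clean probing links that are already gone. Condensed from the nvmf_veth_init xtrace that follows:

    ip netns add nvmf_tgt_ns_spdk
    # Four veth pairs: two initiator-side, two target-side (target ends
    # are moved into the namespace that will run the SPDK target).
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    # Bridge the host-side peer ends so initiator and target can talk.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$l" up
        ip link set "$l" master nvmf_br
    done

The ping block that closes the setup simply verifies all four addresses are reachable across the bridge before the kernel target is configured.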
00:16:30.465 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.465 --rc genhtml_branch_coverage=1 00:16:30.465 --rc genhtml_function_coverage=1 00:16:30.465 --rc genhtml_legend=1 00:16:30.465 --rc geninfo_all_blocks=1 00:16:30.465 --rc geninfo_unexecuted_blocks=1 00:16:30.465 00:16:30.465 ' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.465 --rc genhtml_branch_coverage=1 00:16:30.465 --rc genhtml_function_coverage=1 00:16:30.465 --rc genhtml_legend=1 00:16:30.465 --rc geninfo_all_blocks=1 00:16:30.465 --rc geninfo_unexecuted_blocks=1 00:16:30.465 00:16:30.465 ' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.465 --rc genhtml_branch_coverage=1 00:16:30.465 --rc genhtml_function_coverage=1 00:16:30.465 --rc genhtml_legend=1 00:16:30.465 --rc geninfo_all_blocks=1 00:16:30.465 --rc geninfo_unexecuted_blocks=1 00:16:30.465 00:16:30.465 ' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:30.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.465 --rc genhtml_branch_coverage=1 00:16:30.465 --rc genhtml_function_coverage=1 00:16:30.465 --rc genhtml_legend=1 00:16:30.465 --rc geninfo_all_blocks=1 00:16:30.465 --rc geninfo_unexecuted_blocks=1 00:16:30.465 00:16:30.465 ' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
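The version gate traced above (lt 1.15 2 via cmp_versions) decides which lcov flag spelling to export: lcov 1.15 is older than 2.0, so the pre-2.0 --rc lcov_branch_coverage=1 style is used. The compare splits both versions on ., - and : and walks the fields numerically; a sketch of the same logic (the real helper in scripts/common.sh also routes each field through a decimal() sanity check, visible in the trace):

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local IFS=.-: op=$2 v
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            if ((10#${ver1[v]:-0} > 10#${ver2[v]:-0})); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif ((10#${ver1[v]:-0} < 10#${ver2[v]:-0})); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]   # all fields equal
    }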
00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:16:30.465 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.466 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:16:30.466 09:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.466 09:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.466 Cannot find device "nvmf_init_br" 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:16:30.466 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.724 Cannot find device "nvmf_init_br2" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:30.724 Cannot find device "nvmf_tgt_br" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.724 Cannot find device "nvmf_tgt_br2" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.724 Cannot find device "nvmf_init_br" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.724 Cannot find device "nvmf_init_br2" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.724 Cannot find device "nvmf_tgt_br" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.724 Cannot find device "nvmf_tgt_br2" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.724 Cannot find device "nvmf_br" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.724 Cannot find device "nvmf_init_if" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:30.724 Cannot find device "nvmf_init_if2" 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:16:30.724 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.725 09:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.725 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.725 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.725 09:39:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.984 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.984 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:16:30.984 00:16:30.984 --- 10.0.0.3 ping statistics --- 00:16:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.984 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.984 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.984 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:16:30.984 00:16:30.984 --- 10.0.0.4 ping statistics --- 00:16:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.984 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:30.984 00:16:30.984 --- 10.0.0.1 ping statistics --- 00:16:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.984 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.071 ms 00:16:30.984 00:16:30.984 --- 10.0.0.2 ping statistics --- 00:16:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.984 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:30.984 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:30.985 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:16:30.985 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:16:30.985 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:16:30.985 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:16:30.985 09:39:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:31.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:31.243 Waiting for block devices as requested 00:16:31.501 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:31.501 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:16:31.501 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:31.759 No valid GPT data, bailing 00:16:31.759 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:31.760 09:39:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:31.760 No valid GPT data, bailing 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:31.760 No valid GPT data, bailing 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:31.760 No valid GPT data, bailing 00:16:31.760 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -a 10.0.0.1 -t tcp -s 4420 00:16:32.019 00:16:32.019 Discovery Log Number of Records 2, Generation counter 2 00:16:32.019 =====Discovery Log Entry 0====== 00:16:32.019 trtype: tcp 00:16:32.019 adrfam: ipv4 00:16:32.019 subtype: current discovery subsystem 00:16:32.019 treq: not specified, sq flow control disable supported 00:16:32.019 portid: 1 00:16:32.019 trsvcid: 4420 00:16:32.019 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:32.019 traddr: 10.0.0.1 00:16:32.019 eflags: none 00:16:32.019 sectype: none 00:16:32.019 =====Discovery Log Entry 1====== 00:16:32.019 trtype: tcp 00:16:32.019 adrfam: ipv4 00:16:32.019 subtype: nvme subsystem 00:16:32.019 treq: not 
specified, sq flow control disable supported 00:16:32.019 portid: 1 00:16:32.019 trsvcid: 4420 00:16:32.019 subnqn: nqn.2016-06.io.spdk:testnqn 00:16:32.019 traddr: 10.0.0.1 00:16:32.019 eflags: none 00:16:32.019 sectype: none 00:16:32.019 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:16:32.019 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:16:32.019 ===================================================== 00:16:32.019 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:16:32.019 ===================================================== 00:16:32.019 Controller Capabilities/Features 00:16:32.019 ================================ 00:16:32.019 Vendor ID: 0000 00:16:32.019 Subsystem Vendor ID: 0000 00:16:32.019 Serial Number: 9b86960ffe5d5267b634 00:16:32.019 Model Number: Linux 00:16:32.019 Firmware Version: 6.8.9-20 00:16:32.019 Recommended Arb Burst: 0 00:16:32.019 IEEE OUI Identifier: 00 00 00 00:16:32.019 Multi-path I/O 00:16:32.019 May have multiple subsystem ports: No 00:16:32.019 May have multiple controllers: No 00:16:32.019 Associated with SR-IOV VF: No 00:16:32.019 Max Data Transfer Size: Unlimited 00:16:32.019 Max Number of Namespaces: 0 00:16:32.019 Max Number of I/O Queues: 1024 00:16:32.019 NVMe Specification Version (VS): 1.3 00:16:32.019 NVMe Specification Version (Identify): 1.3 00:16:32.019 Maximum Queue Entries: 1024 00:16:32.019 Contiguous Queues Required: No 00:16:32.019 Arbitration Mechanisms Supported 00:16:32.019 Weighted Round Robin: Not Supported 00:16:32.019 Vendor Specific: Not Supported 00:16:32.019 Reset Timeout: 7500 ms 00:16:32.019 Doorbell Stride: 4 bytes 00:16:32.019 NVM Subsystem Reset: Not Supported 00:16:32.019 Command Sets Supported 00:16:32.019 NVM Command Set: Supported 00:16:32.019 Boot Partition: Not Supported 00:16:32.019 Memory Page Size Minimum: 4096 bytes 00:16:32.019 Memory Page Size Maximum: 4096 bytes 00:16:32.019 Persistent Memory Region: Not Supported 00:16:32.019 Optional Asynchronous Events Supported 00:16:32.019 Namespace Attribute Notices: Not Supported 00:16:32.019 Firmware Activation Notices: Not Supported 00:16:32.019 ANA Change Notices: Not Supported 00:16:32.019 PLE Aggregate Log Change Notices: Not Supported 00:16:32.019 LBA Status Info Alert Notices: Not Supported 00:16:32.020 EGE Aggregate Log Change Notices: Not Supported 00:16:32.020 Normal NVM Subsystem Shutdown event: Not Supported 00:16:32.020 Zone Descriptor Change Notices: Not Supported 00:16:32.020 Discovery Log Change Notices: Supported 00:16:32.020 Controller Attributes 00:16:32.020 128-bit Host Identifier: Not Supported 00:16:32.020 Non-Operational Permissive Mode: Not Supported 00:16:32.020 NVM Sets: Not Supported 00:16:32.020 Read Recovery Levels: Not Supported 00:16:32.020 Endurance Groups: Not Supported 00:16:32.020 Predictable Latency Mode: Not Supported 00:16:32.020 Traffic Based Keep ALive: Not Supported 00:16:32.020 Namespace Granularity: Not Supported 00:16:32.020 SQ Associations: Not Supported 00:16:32.020 UUID List: Not Supported 00:16:32.020 Multi-Domain Subsystem: Not Supported 00:16:32.020 Fixed Capacity Management: Not Supported 00:16:32.020 Variable Capacity Management: Not Supported 00:16:32.020 Delete Endurance Group: Not Supported 00:16:32.020 Delete NVM Set: Not Supported 00:16:32.020 Extended LBA Formats Supported: Not Supported 00:16:32.020 Flexible Data 
Placement Supported: Not Supported 00:16:32.020 00:16:32.020 Controller Memory Buffer Support 00:16:32.020 ================================ 00:16:32.020 Supported: No 00:16:32.020 00:16:32.020 Persistent Memory Region Support 00:16:32.020 ================================ 00:16:32.020 Supported: No 00:16:32.020 00:16:32.020 Admin Command Set Attributes 00:16:32.020 ============================ 00:16:32.020 Security Send/Receive: Not Supported 00:16:32.020 Format NVM: Not Supported 00:16:32.020 Firmware Activate/Download: Not Supported 00:16:32.020 Namespace Management: Not Supported 00:16:32.020 Device Self-Test: Not Supported 00:16:32.020 Directives: Not Supported 00:16:32.020 NVMe-MI: Not Supported 00:16:32.020 Virtualization Management: Not Supported 00:16:32.020 Doorbell Buffer Config: Not Supported 00:16:32.020 Get LBA Status Capability: Not Supported 00:16:32.020 Command & Feature Lockdown Capability: Not Supported 00:16:32.020 Abort Command Limit: 1 00:16:32.020 Async Event Request Limit: 1 00:16:32.020 Number of Firmware Slots: N/A 00:16:32.020 Firmware Slot 1 Read-Only: N/A 00:16:32.020 Firmware Activation Without Reset: N/A 00:16:32.020 Multiple Update Detection Support: N/A 00:16:32.020 Firmware Update Granularity: No Information Provided 00:16:32.020 Per-Namespace SMART Log: No 00:16:32.020 Asymmetric Namespace Access Log Page: Not Supported 00:16:32.020 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:16:32.020 Command Effects Log Page: Not Supported 00:16:32.020 Get Log Page Extended Data: Supported 00:16:32.020 Telemetry Log Pages: Not Supported 00:16:32.020 Persistent Event Log Pages: Not Supported 00:16:32.020 Supported Log Pages Log Page: May Support 00:16:32.020 Commands Supported & Effects Log Page: Not Supported 00:16:32.020 Feature Identifiers & Effects Log Page:May Support 00:16:32.020 NVMe-MI Commands & Effects Log Page: May Support 00:16:32.020 Data Area 4 for Telemetry Log: Not Supported 00:16:32.020 Error Log Page Entries Supported: 1 00:16:32.020 Keep Alive: Not Supported 00:16:32.020 00:16:32.020 NVM Command Set Attributes 00:16:32.020 ========================== 00:16:32.020 Submission Queue Entry Size 00:16:32.020 Max: 1 00:16:32.020 Min: 1 00:16:32.020 Completion Queue Entry Size 00:16:32.020 Max: 1 00:16:32.020 Min: 1 00:16:32.020 Number of Namespaces: 0 00:16:32.020 Compare Command: Not Supported 00:16:32.020 Write Uncorrectable Command: Not Supported 00:16:32.020 Dataset Management Command: Not Supported 00:16:32.020 Write Zeroes Command: Not Supported 00:16:32.020 Set Features Save Field: Not Supported 00:16:32.020 Reservations: Not Supported 00:16:32.020 Timestamp: Not Supported 00:16:32.020 Copy: Not Supported 00:16:32.020 Volatile Write Cache: Not Present 00:16:32.020 Atomic Write Unit (Normal): 1 00:16:32.020 Atomic Write Unit (PFail): 1 00:16:32.020 Atomic Compare & Write Unit: 1 00:16:32.020 Fused Compare & Write: Not Supported 00:16:32.020 Scatter-Gather List 00:16:32.020 SGL Command Set: Supported 00:16:32.020 SGL Keyed: Not Supported 00:16:32.020 SGL Bit Bucket Descriptor: Not Supported 00:16:32.020 SGL Metadata Pointer: Not Supported 00:16:32.020 Oversized SGL: Not Supported 00:16:32.020 SGL Metadata Address: Not Supported 00:16:32.020 SGL Offset: Supported 00:16:32.020 Transport SGL Data Block: Not Supported 00:16:32.020 Replay Protected Memory Block: Not Supported 00:16:32.020 00:16:32.020 Firmware Slot Information 00:16:32.020 ========================= 00:16:32.020 Active slot: 0 00:16:32.020 00:16:32.020 00:16:32.020 Error Log 
00:16:32.020 ========= 00:16:32.020 00:16:32.020 Active Namespaces 00:16:32.020 ================= 00:16:32.020 Discovery Log Page 00:16:32.020 ================== 00:16:32.020 Generation Counter: 2 00:16:32.020 Number of Records: 2 00:16:32.020 Record Format: 0 00:16:32.020 00:16:32.020 Discovery Log Entry 0 00:16:32.020 ---------------------- 00:16:32.020 Transport Type: 3 (TCP) 00:16:32.020 Address Family: 1 (IPv4) 00:16:32.020 Subsystem Type: 3 (Current Discovery Subsystem) 00:16:32.020 Entry Flags: 00:16:32.020 Duplicate Returned Information: 0 00:16:32.020 Explicit Persistent Connection Support for Discovery: 0 00:16:32.020 Transport Requirements: 00:16:32.020 Secure Channel: Not Specified 00:16:32.020 Port ID: 1 (0x0001) 00:16:32.020 Controller ID: 65535 (0xffff) 00:16:32.020 Admin Max SQ Size: 32 00:16:32.020 Transport Service Identifier: 4420 00:16:32.020 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:16:32.020 Transport Address: 10.0.0.1 00:16:32.020 Discovery Log Entry 1 00:16:32.020 ---------------------- 00:16:32.020 Transport Type: 3 (TCP) 00:16:32.020 Address Family: 1 (IPv4) 00:16:32.020 Subsystem Type: 2 (NVM Subsystem) 00:16:32.020 Entry Flags: 00:16:32.020 Duplicate Returned Information: 0 00:16:32.020 Explicit Persistent Connection Support for Discovery: 0 00:16:32.020 Transport Requirements: 00:16:32.020 Secure Channel: Not Specified 00:16:32.020 Port ID: 1 (0x0001) 00:16:32.020 Controller ID: 65535 (0xffff) 00:16:32.020 Admin Max SQ Size: 32 00:16:32.020 Transport Service Identifier: 4420 00:16:32.020 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:16:32.020 Transport Address: 10.0.0.1 00:16:32.020 09:39:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:16:32.279 get_feature(0x01) failed 00:16:32.279 get_feature(0x02) failed 00:16:32.279 get_feature(0x04) failed 00:16:32.279 ===================================================== 00:16:32.279 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:16:32.279 ===================================================== 00:16:32.279 Controller Capabilities/Features 00:16:32.279 ================================ 00:16:32.279 Vendor ID: 0000 00:16:32.279 Subsystem Vendor ID: 0000 00:16:32.279 Serial Number: 1f1a4c41237a6328a88a 00:16:32.279 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:16:32.279 Firmware Version: 6.8.9-20 00:16:32.279 Recommended Arb Burst: 6 00:16:32.279 IEEE OUI Identifier: 00 00 00 00:16:32.279 Multi-path I/O 00:16:32.279 May have multiple subsystem ports: Yes 00:16:32.279 May have multiple controllers: Yes 00:16:32.279 Associated with SR-IOV VF: No 00:16:32.279 Max Data Transfer Size: Unlimited 00:16:32.279 Max Number of Namespaces: 1024 00:16:32.279 Max Number of I/O Queues: 128 00:16:32.279 NVMe Specification Version (VS): 1.3 00:16:32.279 NVMe Specification Version (Identify): 1.3 00:16:32.279 Maximum Queue Entries: 1024 00:16:32.279 Contiguous Queues Required: No 00:16:32.279 Arbitration Mechanisms Supported 00:16:32.279 Weighted Round Robin: Not Supported 00:16:32.279 Vendor Specific: Not Supported 00:16:32.279 Reset Timeout: 7500 ms 00:16:32.279 Doorbell Stride: 4 bytes 00:16:32.279 NVM Subsystem Reset: Not Supported 00:16:32.279 Command Sets Supported 00:16:32.279 NVM Command Set: Supported 00:16:32.279 Boot Partition: Not Supported 00:16:32.279 Memory 
Page Size Minimum: 4096 bytes 00:16:32.279 Memory Page Size Maximum: 4096 bytes 00:16:32.279 Persistent Memory Region: Not Supported 00:16:32.279 Optional Asynchronous Events Supported 00:16:32.279 Namespace Attribute Notices: Supported 00:16:32.279 Firmware Activation Notices: Not Supported 00:16:32.280 ANA Change Notices: Supported 00:16:32.280 PLE Aggregate Log Change Notices: Not Supported 00:16:32.280 LBA Status Info Alert Notices: Not Supported 00:16:32.280 EGE Aggregate Log Change Notices: Not Supported 00:16:32.280 Normal NVM Subsystem Shutdown event: Not Supported 00:16:32.280 Zone Descriptor Change Notices: Not Supported 00:16:32.280 Discovery Log Change Notices: Not Supported 00:16:32.280 Controller Attributes 00:16:32.280 128-bit Host Identifier: Supported 00:16:32.280 Non-Operational Permissive Mode: Not Supported 00:16:32.280 NVM Sets: Not Supported 00:16:32.280 Read Recovery Levels: Not Supported 00:16:32.280 Endurance Groups: Not Supported 00:16:32.280 Predictable Latency Mode: Not Supported 00:16:32.280 Traffic Based Keep ALive: Supported 00:16:32.280 Namespace Granularity: Not Supported 00:16:32.280 SQ Associations: Not Supported 00:16:32.280 UUID List: Not Supported 00:16:32.280 Multi-Domain Subsystem: Not Supported 00:16:32.280 Fixed Capacity Management: Not Supported 00:16:32.280 Variable Capacity Management: Not Supported 00:16:32.280 Delete Endurance Group: Not Supported 00:16:32.280 Delete NVM Set: Not Supported 00:16:32.280 Extended LBA Formats Supported: Not Supported 00:16:32.280 Flexible Data Placement Supported: Not Supported 00:16:32.280 00:16:32.280 Controller Memory Buffer Support 00:16:32.280 ================================ 00:16:32.280 Supported: No 00:16:32.280 00:16:32.280 Persistent Memory Region Support 00:16:32.280 ================================ 00:16:32.280 Supported: No 00:16:32.280 00:16:32.280 Admin Command Set Attributes 00:16:32.280 ============================ 00:16:32.280 Security Send/Receive: Not Supported 00:16:32.280 Format NVM: Not Supported 00:16:32.280 Firmware Activate/Download: Not Supported 00:16:32.280 Namespace Management: Not Supported 00:16:32.280 Device Self-Test: Not Supported 00:16:32.280 Directives: Not Supported 00:16:32.280 NVMe-MI: Not Supported 00:16:32.280 Virtualization Management: Not Supported 00:16:32.280 Doorbell Buffer Config: Not Supported 00:16:32.280 Get LBA Status Capability: Not Supported 00:16:32.280 Command & Feature Lockdown Capability: Not Supported 00:16:32.280 Abort Command Limit: 4 00:16:32.280 Async Event Request Limit: 4 00:16:32.280 Number of Firmware Slots: N/A 00:16:32.280 Firmware Slot 1 Read-Only: N/A 00:16:32.280 Firmware Activation Without Reset: N/A 00:16:32.280 Multiple Update Detection Support: N/A 00:16:32.280 Firmware Update Granularity: No Information Provided 00:16:32.280 Per-Namespace SMART Log: Yes 00:16:32.280 Asymmetric Namespace Access Log Page: Supported 00:16:32.280 ANA Transition Time : 10 sec 00:16:32.280 00:16:32.280 Asymmetric Namespace Access Capabilities 00:16:32.280 ANA Optimized State : Supported 00:16:32.280 ANA Non-Optimized State : Supported 00:16:32.280 ANA Inaccessible State : Supported 00:16:32.280 ANA Persistent Loss State : Supported 00:16:32.280 ANA Change State : Supported 00:16:32.280 ANAGRPID is not changed : No 00:16:32.280 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:16:32.280 00:16:32.280 ANA Group Identifier Maximum : 128 00:16:32.280 Number of ANA Group Identifiers : 128 00:16:32.280 Max Number of Allowed Namespaces : 1024 00:16:32.280 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:16:32.280 Command Effects Log Page: Supported 00:16:32.280 Get Log Page Extended Data: Supported 00:16:32.280 Telemetry Log Pages: Not Supported 00:16:32.280 Persistent Event Log Pages: Not Supported 00:16:32.280 Supported Log Pages Log Page: May Support 00:16:32.280 Commands Supported & Effects Log Page: Not Supported 00:16:32.280 Feature Identifiers & Effects Log Page:May Support 00:16:32.280 NVMe-MI Commands & Effects Log Page: May Support 00:16:32.280 Data Area 4 for Telemetry Log: Not Supported 00:16:32.280 Error Log Page Entries Supported: 128 00:16:32.280 Keep Alive: Supported 00:16:32.280 Keep Alive Granularity: 1000 ms 00:16:32.280 00:16:32.280 NVM Command Set Attributes 00:16:32.280 ========================== 00:16:32.280 Submission Queue Entry Size 00:16:32.280 Max: 64 00:16:32.280 Min: 64 00:16:32.280 Completion Queue Entry Size 00:16:32.280 Max: 16 00:16:32.280 Min: 16 00:16:32.280 Number of Namespaces: 1024 00:16:32.280 Compare Command: Not Supported 00:16:32.280 Write Uncorrectable Command: Not Supported 00:16:32.280 Dataset Management Command: Supported 00:16:32.280 Write Zeroes Command: Supported 00:16:32.280 Set Features Save Field: Not Supported 00:16:32.280 Reservations: Not Supported 00:16:32.280 Timestamp: Not Supported 00:16:32.280 Copy: Not Supported 00:16:32.280 Volatile Write Cache: Present 00:16:32.280 Atomic Write Unit (Normal): 1 00:16:32.280 Atomic Write Unit (PFail): 1 00:16:32.280 Atomic Compare & Write Unit: 1 00:16:32.280 Fused Compare & Write: Not Supported 00:16:32.280 Scatter-Gather List 00:16:32.280 SGL Command Set: Supported 00:16:32.280 SGL Keyed: Not Supported 00:16:32.280 SGL Bit Bucket Descriptor: Not Supported 00:16:32.280 SGL Metadata Pointer: Not Supported 00:16:32.280 Oversized SGL: Not Supported 00:16:32.280 SGL Metadata Address: Not Supported 00:16:32.280 SGL Offset: Supported 00:16:32.280 Transport SGL Data Block: Not Supported 00:16:32.280 Replay Protected Memory Block: Not Supported 00:16:32.280 00:16:32.280 Firmware Slot Information 00:16:32.280 ========================= 00:16:32.280 Active slot: 0 00:16:32.280 00:16:32.280 Asymmetric Namespace Access 00:16:32.280 =========================== 00:16:32.280 Change Count : 0 00:16:32.280 Number of ANA Group Descriptors : 1 00:16:32.280 ANA Group Descriptor : 0 00:16:32.280 ANA Group ID : 1 00:16:32.280 Number of NSID Values : 1 00:16:32.280 Change Count : 0 00:16:32.280 ANA State : 1 00:16:32.280 Namespace Identifier : 1 00:16:32.280 00:16:32.280 Commands Supported and Effects 00:16:32.280 ============================== 00:16:32.280 Admin Commands 00:16:32.280 -------------- 00:16:32.280 Get Log Page (02h): Supported 00:16:32.280 Identify (06h): Supported 00:16:32.280 Abort (08h): Supported 00:16:32.280 Set Features (09h): Supported 00:16:32.280 Get Features (0Ah): Supported 00:16:32.280 Asynchronous Event Request (0Ch): Supported 00:16:32.280 Keep Alive (18h): Supported 00:16:32.280 I/O Commands 00:16:32.280 ------------ 00:16:32.280 Flush (00h): Supported 00:16:32.280 Write (01h): Supported LBA-Change 00:16:32.280 Read (02h): Supported 00:16:32.280 Write Zeroes (08h): Supported LBA-Change 00:16:32.280 Dataset Management (09h): Supported 00:16:32.280 00:16:32.280 Error Log 00:16:32.280 ========= 00:16:32.280 Entry: 0 00:16:32.280 Error Count: 0x3 00:16:32.280 Submission Queue Id: 0x0 00:16:32.280 Command Id: 0x5 00:16:32.280 Phase Bit: 0 00:16:32.280 Status Code: 0x2 00:16:32.280 Status Code Type: 0x0 00:16:32.280 Do Not Retry: 1 00:16:32.280 Error 
Location: 0x28 00:16:32.280 LBA: 0x0 00:16:32.280 Namespace: 0x0 00:16:32.280 Vendor Log Page: 0x0 00:16:32.280 ----------- 00:16:32.280 Entry: 1 00:16:32.280 Error Count: 0x2 00:16:32.280 Submission Queue Id: 0x0 00:16:32.280 Command Id: 0x5 00:16:32.280 Phase Bit: 0 00:16:32.280 Status Code: 0x2 00:16:32.280 Status Code Type: 0x0 00:16:32.280 Do Not Retry: 1 00:16:32.280 Error Location: 0x28 00:16:32.280 LBA: 0x0 00:16:32.280 Namespace: 0x0 00:16:32.280 Vendor Log Page: 0x0 00:16:32.280 ----------- 00:16:32.280 Entry: 2 00:16:32.280 Error Count: 0x1 00:16:32.280 Submission Queue Id: 0x0 00:16:32.280 Command Id: 0x4 00:16:32.280 Phase Bit: 0 00:16:32.280 Status Code: 0x2 00:16:32.280 Status Code Type: 0x0 00:16:32.280 Do Not Retry: 1 00:16:32.280 Error Location: 0x28 00:16:32.280 LBA: 0x0 00:16:32.280 Namespace: 0x0 00:16:32.280 Vendor Log Page: 0x0 00:16:32.280 00:16:32.280 Number of Queues 00:16:32.280 ================ 00:16:32.280 Number of I/O Submission Queues: 128 00:16:32.280 Number of I/O Completion Queues: 128 00:16:32.280 00:16:32.280 ZNS Specific Controller Data 00:16:32.280 ============================ 00:16:32.280 Zone Append Size Limit: 0 00:16:32.280 00:16:32.280 00:16:32.280 Active Namespaces 00:16:32.280 ================= 00:16:32.280 get_feature(0x05) failed 00:16:32.280 Namespace ID:1 00:16:32.280 Command Set Identifier: NVM (00h) 00:16:32.280 Deallocate: Supported 00:16:32.280 Deallocated/Unwritten Error: Not Supported 00:16:32.280 Deallocated Read Value: Unknown 00:16:32.281 Deallocate in Write Zeroes: Not Supported 00:16:32.281 Deallocated Guard Field: 0xFFFF 00:16:32.281 Flush: Supported 00:16:32.281 Reservation: Not Supported 00:16:32.281 Namespace Sharing Capabilities: Multiple Controllers 00:16:32.281 Size (in LBAs): 1310720 (5GiB) 00:16:32.281 Capacity (in LBAs): 1310720 (5GiB) 00:16:32.281 Utilization (in LBAs): 1310720 (5GiB) 00:16:32.281 UUID: 4bce673b-baa0-41f3-a893-c333c4ac1714 00:16:32.281 Thin Provisioning: Not Supported 00:16:32.281 Per-NS Atomic Units: Yes 00:16:32.281 Atomic Boundary Size (Normal): 0 00:16:32.281 Atomic Boundary Size (PFail): 0 00:16:32.281 Atomic Boundary Offset: 0 00:16:32.281 NGUID/EUI64 Never Reused: No 00:16:32.281 ANA group ID: 1 00:16:32.281 Namespace Write Protected: No 00:16:32.281 Number of LBA Formats: 1 00:16:32.281 Current LBA Format: LBA Format #00 00:16:32.281 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:16:32.281 00:16:32.281 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:16:32.281 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:32.281 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:32.539 rmmod nvme_tcp 00:16:32.539 rmmod nvme_fabrics 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:16:32.539 09:39:18 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:32.539 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:16:32.798 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:16:33.056 09:39:18 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:33.623 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:33.623 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.880 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:33.880 00:16:33.880 real 0m3.462s 00:16:33.880 user 0m1.171s 00:16:33.880 sys 0m1.442s 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.880 ************************************ 00:16:33.880 END TEST nvmf_identify_kernel_target 00:16:33.880 ************************************ 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:16:33.880 ************************************ 00:16:33.880 START TEST nvmf_auth_host 00:16:33.880 ************************************ 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:16:33.880 * Looking for test storage... 
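
Before the next test starts, it is worth condensing what configure_kernel_target (common.sh@660-@705 above) and clean_kernel_target (@712-@723) just did: the in-kernel target is driven entirely through configfs, with no daemon involved. The xtrace only shows the values being echoed, not the attribute files they land in, so the destinations in this sketch are assumptions based on the standard nvmet configfs layout rather than something visible in the log:

  nqn=nqn.2016-06.io.spdk:testnqn
  sub=/sys/kernel/config/nvmet/subsystems/$nqn
  port=/sys/kernel/config/nvmet/ports/1

  modprobe nvmet
  mkdir "$sub" "$sub/namespaces/1" "$port"
  echo "SPDK-$nqn"  > "$sub/attr_model"                 # assumed target of @693; it surfaces
                                                        # later as the controller Model Number
  echo 1            > "$sub/attr_allow_any_host"        # assumed target of the 'echo 1' at @695
  echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # the block device picked as free
  echo 1            > "$sub/namespaces/1/enable"
  echo 10.0.0.1     > "$port/addr_traddr"
  echo tcp          > "$port/addr_trtype"
  echo 4420         > "$port/addr_trsvcid"
  echo ipv4         > "$port/addr_adrfam"
  ln -s "$sub" "$port/subsystems/"      # the listener goes live on this link

  # teardown mirrors creation in reverse (clean_kernel_target)
  echo 0 > "$sub/namespaces/1/enable"
  rm -f "$port/subsystems/$nqn"
  rmdir "$sub/namespaces/1" "$port" "$sub"
  modprobe -r nvmet_tcp nvmet

The earlier spdk-gpt.py / blkid loop is what chooses that device_path safely: each /dev/nvme* block device is probed for a partition table, a device that fails the probe ("No valid GPT data, bailing") is treated as not in use, and the last such device, /dev/nvme1n1, is the one handed to the kernel target.
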
00:16:33.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:16:33.880 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.139 --rc genhtml_branch_coverage=1 00:16:34.139 --rc genhtml_function_coverage=1 00:16:34.139 --rc genhtml_legend=1 00:16:34.139 --rc geninfo_all_blocks=1 00:16:34.139 --rc geninfo_unexecuted_blocks=1 00:16:34.139 00:16:34.139 ' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.139 --rc genhtml_branch_coverage=1 00:16:34.139 --rc genhtml_function_coverage=1 00:16:34.139 --rc genhtml_legend=1 00:16:34.139 --rc geninfo_all_blocks=1 00:16:34.139 --rc geninfo_unexecuted_blocks=1 00:16:34.139 00:16:34.139 ' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.139 --rc genhtml_branch_coverage=1 00:16:34.139 --rc genhtml_function_coverage=1 00:16:34.139 --rc genhtml_legend=1 00:16:34.139 --rc geninfo_all_blocks=1 00:16:34.139 --rc geninfo_unexecuted_blocks=1 00:16:34.139 00:16:34.139 ' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:34.139 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:34.139 --rc genhtml_branch_coverage=1 00:16:34.139 --rc genhtml_function_coverage=1 00:16:34.139 --rc genhtml_legend=1 00:16:34.139 --rc geninfo_all_blocks=1 00:16:34.139 --rc geninfo_unexecuted_blocks=1 00:16:34.139 00:16:34.139 ' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:34.139 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:34.140 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:34.140 Cannot find device "nvmf_init_br" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:34.140 Cannot find device "nvmf_init_br2" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:34.140 Cannot find device "nvmf_tgt_br" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:34.140 Cannot find device "nvmf_tgt_br2" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:34.140 Cannot find device "nvmf_init_br" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:34.140 Cannot find device "nvmf_init_br2" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:34.140 Cannot find device "nvmf_tgt_br" 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:16:34.140 09:39:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:34.140 Cannot find device "nvmf_tgt_br2" 00:16:34.140 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:16:34.140 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:34.140 Cannot find device "nvmf_br" 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:34.141 Cannot find device "nvmf_init_if" 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:34.141 Cannot find device "nvmf_init_if2" 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:34.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.141 09:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:34.141 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:34.141 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:34.399 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
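The nvmf_veth_init trace above reduces to a small, repeatable recipe: the two initiator-side veth pairs stay in the root namespace (10.0.0.1 and 10.0.0.2), the two target-side pairs have their "if" ends moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), and all of the bridge-side peers are enslaved to nvmf_br so both ends share one L2 segment. A condensed sketch of one initiator/target pair, using the same names and addressing as the test (the second pair and the remaining link-up steps are analogous):

# hedged sketch of the topology nvmf_veth_init builds above
ip netns add nvmf_tgt_ns_spdk
# initiator-side veth pair; the "if" end carries the host address
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip addr add 10.0.0.1/24 dev nvmf_init_if
# target-side veth pair; the "if" end is pushed into the namespace
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
# the bridge ties the "br" peers together into one L2 segment
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br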
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:16:34.400 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:16:34.400 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.088 ms
00:16:34.400
00:16:34.400 --- 10.0.0.3 ping statistics ---
00:16:34.400 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:34.400 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms
00:16:34.400 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:16:34.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:16:34.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms
00:16:34.657
00:16:34.657 --- 10.0.0.4 ping statistics ---
00:16:34.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:34.657 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms
00:16:34.657 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:16:34.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:16:34.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms
00:16:34.657
00:16:34.657 --- 10.0.0.1 ping statistics ---
00:16:34.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:34.657 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms
00:16:34.657 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:16:34.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:16:34.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms
00:16:34.657
00:16:34.657 --- 10.0.0.2 ping statistics ---
00:16:34.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:34.657 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms
00:16:34.657 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=78073
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 78073
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78073 ']'
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
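The nvmfappstart contract is visible in the trace above: it forks the SPDK target inside the target namespace, records nvmfpid, and waitforlisten then polls the app's RPC Unix socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100) until an RPC call succeeds. A minimal sketch of that start-and-wait loop, assuming SPDK's stock rpc.py client (the real helper lives in common/autotest_common.sh):

# hedged sketch: start the target in the namespace, then wait for its RPC socket
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# rpc_get_methods succeeds once the app is up and listening on the socket
for ((i = 0; i < 100; i++)); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    sleep 0.1
done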
00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:34.658 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=540b6808523ea81d87d9805fc8bef1c5 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.v07 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 540b6808523ea81d87d9805fc8bef1c5 0 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 540b6808523ea81d87d9805fc8bef1c5 0 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=540b6808523ea81d87d9805fc8bef1c5 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.v07 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.v07 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.v07 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:34.916 09:39:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=01b0dc0d216d2de2b5f56bd3e9bfefbdc23f286200f7ba8f84d8b37f31163d74 00:16:34.916 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.lm9 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 01b0dc0d216d2de2b5f56bd3e9bfefbdc23f286200f7ba8f84d8b37f31163d74 3 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 01b0dc0d216d2de2b5f56bd3e9bfefbdc23f286200f7ba8f84d8b37f31163d74 3 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=01b0dc0d216d2de2b5f56bd3e9bfefbdc23f286200f7ba8f84d8b37f31163d74 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:34.917 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.lm9 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.lm9 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.lm9 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b98e42c15fd096e447586fd0dcb6af2ca3cc10ddf31f6f98 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.fTk 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b98e42c15fd096e447586fd0dcb6af2ca3cc10ddf31f6f98 0 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b98e42c15fd096e447586fd0dcb6af2ca3cc10ddf31f6f98 0 
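Every gen_dhchap_key call in this stretch follows the same pattern: xxd draws len/2 random bytes from /dev/urandom as a hex string, and the inline "python -" step wraps that ASCII string in the DHCHAP secret format DHHC-1:<digest-id>:<base64 payload>:, with digest ids 00=null, 01=sha256, 02=sha384, 03=sha512 (the finished keys are visible later in the log, e.g. DHHC-1:00:Yjk4...). A standalone sketch of the formatting step, assuming the payload is the ASCII key followed by a CRC32 trailer appended little-endian, as nvme-cli's gen-dhchap-key does:

# hedged sketch of gen_dhchap_key's formatting step
key=$(xxd -p -c0 -l 24 /dev/urandom)    # 48 hex chars, used verbatim as the key bytes
python3 - "$key" 0 <<'PYEOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")    # assumption: little-endian CRC trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF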
00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b98e42c15fd096e447586fd0dcb6af2ca3cc10ddf31f6f98 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.fTk 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.fTk 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fTk 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=98912773b62451c4ab621057305a12efe1ed615a00b9d439 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Saj 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 98912773b62451c4ab621057305a12efe1ed615a00b9d439 2 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 98912773b62451c4ab621057305a12efe1ed615a00b9d439 2 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=98912773b62451c4ab621057305a12efe1ed615a00b9d439 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Saj 00:16:35.176 09:39:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Saj 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Saj 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.176 09:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=60cb9cbb20d2908c22cc6ddd75311741 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jsM 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 60cb9cbb20d2908c22cc6ddd75311741 1 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 60cb9cbb20d2908c22cc6ddd75311741 1 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=60cb9cbb20d2908c22cc6ddd75311741 00:16:35.176 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jsM 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jsM 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jsM 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=6433e248fe21c6ff3514aea4d644eca8 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.OvU 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 6433e248fe21c6ff3514aea4d644eca8 1 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 6433e248fe21c6ff3514aea4d644eca8 1 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=6433e248fe21c6ff3514aea4d644eca8 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.OvU 00:16:35.177 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.OvU 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.OvU 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=98ef97c225d01cbca2b8ab6475814ee40e7c7d4b61b561e6 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Cpu 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 98ef97c225d01cbca2b8ab6475814ee40e7c7d4b61b561e6 2 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 98ef97c225d01cbca2b8ab6475814ee40e7c7d4b61b561e6 2 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=98ef97c225d01cbca2b8ab6475814ee40e7c7d4b61b561e6 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Cpu 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Cpu 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Cpu 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:16:35.440 09:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=d7042c42d6fbc5954e01dd593c141aeb 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.4am 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key d7042c42d6fbc5954e01dd593c141aeb 0 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 d7042c42d6fbc5954e01dd593c141aeb 0 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=d7042c42d6fbc5954e01dd593c141aeb 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.4am 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.4am 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.4am 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=363cda0d72c5cb4ae64fe6cb070fb1f8254bb500666ac87344b49af0f5bd187b 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.W0l 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 363cda0d72c5cb4ae64fe6cb070fb1f8254bb500666ac87344b49af0f5bd187b 3 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 363cda0d72c5cb4ae64fe6cb070fb1f8254bb500666ac87344b49af0f5bd187b 3 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=363cda0d72c5cb4ae64fe6cb070fb1f8254bb500666ac87344b49af0f5bd187b 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.W0l 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.W0l 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.W0l 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 78073 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 78073 ']' 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:35.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:35.440 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.v07 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.lm9 ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.lm9 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fTk 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Saj ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Saj 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jsM 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.OvU ]] 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.OvU 00:16:36.007 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Cpu 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.4am ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.4am 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.W0l 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:36.008 09:39:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]]
00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet
00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]]
00:16:36.008 09:39:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:16:36.266 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:36.266 Waiting for block devices as requested
00:16:36.267 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:16:36.525 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]]
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1
00:16:37.092 No valid GPT data, bailing
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme*
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]]
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]]
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]]
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2
00:16:37.092 No valid GPT data, bailing
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt=
00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host --
scripts/common.sh@395 -- # return 1 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:16:37.092 09:39:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:37.092 No valid GPT data, bailing 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:37.351 No valid GPT data, bailing 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -a 10.0.0.1 -t tcp -s 4420
00:16:37.351
00:16:37.351 Discovery Log Number of Records 2, Generation counter 2
00:16:37.351 =====Discovery Log Entry 0======
00:16:37.351 trtype: tcp
00:16:37.351 adrfam: ipv4
00:16:37.351 subtype: current discovery subsystem
00:16:37.351 treq: not specified, sq flow control disable supported
00:16:37.351 portid: 1
00:16:37.351 trsvcid: 4420
00:16:37.351 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:37.351 traddr: 10.0.0.1
00:16:37.351 eflags: none
00:16:37.351 sectype: none
00:16:37.351 =====Discovery Log Entry 1======
00:16:37.351 trtype: tcp
00:16:37.351 adrfam: ipv4
00:16:37.351 subtype: nvme subsystem
00:16:37.351 treq: not specified, sq flow control disable supported
00:16:37.351 portid: 1
00:16:37.351 trsvcid: 4420
00:16:37.351 subnqn: nqn.2024-02.io.spdk:cnode0
00:16:37.351 traddr: 10.0.0.1
00:16:37.351 eflags: none
00:16:37.351 sectype: none
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 
00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- #
ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:37.351 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.610 nvme0n1 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:37.610 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.611 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.869 nvme0n1 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.869 
09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.869 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:37.870 09:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.870 nvme0n1 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:37.870 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:38.129 09:39:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.129 nvme0n1 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.129 09:39:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.129 09:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:38.129 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:38.130 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:38.130 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:38.130 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.130 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.388 nvme0n1 00:16:38.388 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:38.389 
09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.389 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
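The trace above completes one full cell of the auth matrix: for digest sha256 and dhgroup ffdhe2048, each keyid (0 through 4) is installed on the target, the initiator is restricted to that single digest/dhgroup pair, and a controller is attached, verified by name, and detached. Note that keyid 4 has no controller key (ckey is empty), so the ${ckeys[keyid]:+...} expansion at auth.sh@58 silently drops --dhchap-ctrlr-key for that round. Below is a minimal sketch of one such round, not the test script itself: it assumes scripts/rpc.py as the path to the SPDK JSON-RPC client behind rpc_cmd, and that the key names key0/ckey0 were registered with SPDK earlier in the run (not shown in this excerpt); the addresses, NQNs, and RPC flags are taken verbatim from the trace.

#!/usr/bin/env bash
# Sketch of one connect_authenticate round, distilled from the trace above.
rpc=scripts/rpc.py    # assumed path to the SPDK JSON-RPC client

digest=sha256
dhgroup=ffdhe2048

# Restrict the host to a single digest/dhgroup pair (auth.sh@60).
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with bidirectional DH-HMAC-CHAP secrets (auth.sh@61); for keys with
# no controller secret (e.g. keyid 4) --dhchap-ctrlr-key is omitted entirely.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the authenticated controller came up, then tear it down (auth.sh@64-65).
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
"$rpc" bdev_nvme_detach_controller nvme0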
00:16:38.389 nvme0n1 00:16:38.647 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.647 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.647 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.647 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:38.648 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:38.906 09:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 nvme0n1 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:38.906 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.165 09:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.165 09:39:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.165 09:39:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.165 nvme0n1 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:39.165 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.166 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.424 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.425 nvme0n1 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.425 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 nvme0n1 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.684 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.943 nvme0n1 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:39.943 09:39:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.509 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.510 09:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.510 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.768 nvme0n1 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.768 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.769 09:39:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.769 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.027 nvme0n1 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.027 09:39:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.286 nvme0n1 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.286 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 nvme0n1 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.545 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:41.804 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:41.805 09:39:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.805 nvme0n1 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:41.805 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
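
# The ffdhe4096 pass of the hmac(sha256) sweep ends here; the trace below repeats the
# identical pattern for ffdhe6144 and then ffdhe8192. A minimal Bash sketch of the loop
# producing this trace, reconstructed only from the traced commands (the host/auth.sh
# @101-@104 loop lines plus the @42-@65 and nvmf/common.sh @769-@783 expansions); the
# keys/ckeys array contents, the TEST_TRANSPORT variable name, and the exact function
# bodies are assumptions, not the verbatim suite source.

get_main_ns_ip() {                      # per the nvmf/common.sh@769-783 expansion above
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}            # variable *name* for this transport
    [[ -n $TEST_TRANSPORT && -n $ip && -n ${!ip} ]] || return 1
    echo "${!ip}"                                   # resolves to 10.0.0.1 in this run
}

for dhgroup in "${dhgroups[@]}"; do     # ffdhe4096, ffdhe6144, ffdhe8192 in this section
    for keyid in "${!keys[@]}"; do      # key indexes 0..4; ckeys[4] is empty
        # target side: install the DHHC-1 key (and ctrlr key, when set) for hmac(sha256)
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"
        # host side: pin negotiation to one digest/dhgroup pair, then authenticate;
        # rpc_cmd is the suite's JSON-RPC wrapper seen throughout the trace
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" \
            -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # pass/fail step: an authenticated attach must surface the controller as nvme0
        # (the bare nvme0n1 lines in the trace are the attach call's bdev-name output)
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0   # clean up before the next combination
    done
done
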
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:42.063 09:39:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:43.979 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:43.980 nvme0n1 00:16:43.980 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.255 09:39:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.255 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.516 nvme0n1 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:44.516 09:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:44.516 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.517 09:39:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.517 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.084 nvme0n1 00:16:45.084 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.084 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.084 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:45.085 09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.085 
09:39:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 nvme0n1 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.344 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.912 nvme0n1 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:45.912 09:39:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.912 09:39:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.480 nvme0n1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.480 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.048 nvme0n1 00:16:47.048 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.048 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.048 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.048 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.048 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.048 09:39:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.307 
09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.307 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.308 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.308 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.308 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:47.308 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.308 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.874 nvme0n1 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.875 09:39:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.442 nvme0n1 00:16:48.443 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.443 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:48.443 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:48.443 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.443 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.443 09:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.701 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:48.702 09:39:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.702 09:39:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.271 nvme0n1 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:49.271 nvme0n1 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.271 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 nvme0n1 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:16:49.531 
09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:49.531 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.532 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.791 nvme0n1 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:49.791 
09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:49.791 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:49.792 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.792 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.051 nvme0n1 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.051 nvme0n1 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.051 09:39:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.310 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.311 nvme0n1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.311 
09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.311 09:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.311 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.571 nvme0n1 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:50.571 09:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.571 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.830 nvme0n1 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.830 09:39:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:50.830 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.831 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 nvme0n1 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:51.090 
09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.090 09:39:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
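The records above complete one full connect_authenticate sweep for the hmac(sha384)/ffdhe3072 combination: for each key index the test pins the host to the digest and DH group under test with bdev_nvme_set_options, resolves the initiator IP (10.0.0.1), attaches with bdev_nvme_attach_controller passing --dhchap-key keyN (and --dhchap-ctrlr-key ckeyN only when a controller key exists, which exercises bidirectional DH-HMAC-CHAP), confirms via bdev_nvme_get_controllers piped through jq that the controller registered as nvme0, and detaches before the next iteration. A minimal sketch of a single iteration follows, assuming SPDK's scripts/rpc.py is reachable and that the DHHC-1 keys named key2/ckey2 were already configured on both host and target earlier in the run (the rpc_py variable is illustrative, not part of the test script):

#!/usr/bin/env bash
set -e
rpc_py=scripts/rpc.py   # assumed location of SPDK's JSON-RPC client

# Pin the host to the digest/DH group combination under test.
$rpc_py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

# Attach over TCP; --dhchap-ctrlr-key is supplied only for key indexes that
# have a controller key, which turns on bidirectional authentication.
$rpc_py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Authentication succeeded iff the controller registered under the expected name.
[[ "$($rpc_py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# Tear down so the next digest/dhgroup/keyid combination starts clean.
$rpc_py bdev_nvme_detach_controller nvme0

The same cycle repeats below with the DH group stepped up through ffdhe4096, ffdhe6144, and ffdhe8192 while the digest stays hmac(sha384).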
00:16:51.090 nvme0n1 00:16:51.090 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.090 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.090 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.090 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.090 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.090 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:51.349 09:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.349 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.350 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.350 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.350 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.350 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.350 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.350 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.608 nvme0n1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.609 09:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.609 09:39:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.609 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.868 nvme0n1 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:51.868 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.869 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.128 nvme0n1 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.128 09:39:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.397 nvme0n1 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.397 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.690 nvme0n1 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.690 09:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:52.690 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:52.691 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:52.691 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.691 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:52.949 nvme0n1 00:16:52.949 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.949 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:52.949 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:52.949 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.949 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.208 09:39:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.208 09:39:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.467 nvme0n1 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.467 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.035 nvme0n1 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.035 09:39:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.293 nvme0n1 00:16:54.293 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.293 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.293 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.293 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.293 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.293 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.552 09:39:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.552 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.811 nvme0n1 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:54.811 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:54.812 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.812 09:39:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.747 nvme0n1 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:55.747 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.748 09:39:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.315 nvme0n1 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.315 09:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.315 09:39:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.315 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.883 nvme0n1 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:56.883 09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.883 
09:39:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.450 nvme0n1 00:16:57.450 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.709 09:39:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.276 nvme0n1 00:16:58.276 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.276 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.276 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.276 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.276 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.276 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:16:58.536 09:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.536 09:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 nvme0n1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:58.536 09:39:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.536 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.796 nvme0n1 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:58.796 nvme0n1 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.796 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.055 nvme0n1 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.055 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.056 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.056 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.056 09:39:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.056 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 nvme0n1 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.315 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:16:59.574 nvme0n1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.574 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.833 nvme0n1 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:16:59.833 
09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:16:59.833 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.834 nvme0n1 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:16:59.834 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.093 
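Every iteration traced above follows the same five-step cycle. A minimal sketch of one pass, using only commands that appear verbatim in the trace (rpc_cmd and nvmet_auth_set_key are the test suite's own helpers; the variable framing here is illustrative):

    digest=sha512 dhgroup=ffdhe3072 keyid=2
    # 1. Program the kernel nvmet target with the DH-HMAC-CHAP key material.
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
    # 2. Restrict the SPDK host side to the digest/dhgroup under test.
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # 3. Attach the controller, forcing the in-band authentication handshake.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # 4. Verify the controller actually came up under the expected name.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    # 5. Tear down before the next key/dhgroup combination.
    rpc_cmd bdev_nvme_detach_controller nvme0

The bare nvme0n1 lines interleaved in the output appear to be the bdev names printed on stdout by the attach RPC each time a controller comes up.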
09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.093 nvme0n1 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.093 09:39:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.093 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
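The DHHC-1 strings echoed throughout this stretch are NVMe in-band authentication secrets in their standard textual form, DHHC-1:<tt>:<base64>:, where <tt> is a two-digit transformation tag (00 appears to denote an untransformed secret, 01/02/03 a SHA-256/384/512-transformed one) and the base64 payload carries the secret followed by a 4-byte CRC32. A rough way to inspect one of the keys from this run; the field semantics are per the NVMe auth secret format as just described, hedged, not taken from the log itself:

    key='DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5:'
    cut -d: -f2 <<< "$key"                     # transformation tag, here 01
    cut -d: -f3 <<< "$key" | base64 -d | wc -c # secret bytes + 4 CRC bytes (36)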
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.353 nvme0n1 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.353 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.612 nvme0n1 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.612 
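xtrace does not print redirections, so the bare echo lines at host/auth.sh@48-51 above ('hmac(sha512)', the dhgroup name, and the DHHC-1 secrets) are writes into the kernel target's configfs host entry. A hedged sketch of what those writes most likely look like; the attribute names below are the standard Linux nvmet ones, and the exact paths the script uses are an assumption since they are not visible in the trace:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha512)' > "$host/dhchap_hash"     # host/auth.sh@48
    echo ffdhe4096      > "$host/dhchap_dhgroup"  # host/auth.sh@49
    echo "$key"         > "$host/dhchap_key"      # host/auth.sh@50
    # host/auth.sh@51: only when a controller (bidirectional) key exists
    [[ -n $ckey ]] && echo "$ckey" > "$host/dhchap_ctrl_key"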
09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:00.612 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.613 09:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.613 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 nvme0n1 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:00.878 09:39:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:00.878 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:00.879 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:00.879 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:00.879 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:00.879 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.879 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.879 09:39:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.156 nvme0n1 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:01.156 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.157 09:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.157 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.416 nvme0n1 00:17:01.416 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.416 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.416 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.416 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.416 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.416 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:17:01.417 
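The ip_candidates block repeated at nvmf/common.sh@769-783 before each attach is the helper that resolves which address to dial: it maps the transport to an environment-variable name and then dereferences it. A compact reconstruction from the traced lines; the array contents and checks match the trace, while the TEST_TRANSPORT variable name and the final indirect expansion are inferred:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp in this job
        [[ -z $ip || -z ${!ip} ]] && return 1  # traced as [[ -z ... ]] guards
        echo "${!ip}"                          # 10.0.0.1 throughout this run
    }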
09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.417 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
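Note how keyid=4 differs from keyids 0-3 in the trace just above: its ckey is empty, so host/auth.sh@51 skips the controller-key echo and the subsequent attach carries only --dhchap-key key4, exercising unidirectional (host-only) authentication. The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) line at host/auth.sh@58 is the bash idiom that makes this fall out automatically: the array expands to the two extra arguments only when a controller key exists. A standalone illustration (the array name and flag are the script's own; the demo values are invented):

    ckeys=([0]=secret0 [4]="")    # keyid 4 has no controller key
    for keyid in 0 4; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo attach --dhchap-key "key$keyid" "${ckey[@]}"
    done
    # -> attach --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # -> attach --dhchap-key key4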
00:17:01.676 nvme0n1 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.676 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:01.935 09:39:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.935 09:39:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.195 nvme0n1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.195 09:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.195 09:39:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.195 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.763 nvme0n1 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.764 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.023 nvme0n1 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.023 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:03.282 09:39:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.282 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.541 nvme0n1 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.541 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.119 nvme0n1 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
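That closes out the ffdhe6144 group. On the target side, every nvmet_auth_set_key call (host/auth.sh@42-51 above) pushes fresh material into the kernel nvmet host entry; the four echo lines at @48-51 carry the digest, the DH group, the host secret, and the optional controller secret. A rough sketch of what those writes presumably look like against configfs; the attribute names are an assumption inferred from the echoes, they are not shown in the log:

# Hedged target-side sketch: reprogram DH-HMAC-CHAP for one allowed host in
# the Linux nvmet target. Configfs attribute names are assumed, not logged.
host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha512)'       > "$host/dhchap_hash"      # digest     (auth.sh@48)
echo 'ffdhe6144'          > "$host/dhchap_dhgroup"   # DH group   (auth.sh@49)
echo 'DHHC-1:00:NTQw...:' > "$host/dhchap_key"       # host secret (auth.sh@50)
echo 'DHHC-1:03:MDFi...:' > "$host/dhchap_ctrl_key"  # ctrl secret, only when a ckey exists (auth.sh@51)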
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDFiMGRjMGQyMTZkMmRlMmI1ZjU2YmQzZTliZmVmYmRjMjNmMjg2MjAwZjdiYThmODRkOGIzN2YzMTE2M2Q3NMKJSYA=: 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.119 09:39:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.119 09:39:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.686 nvme0n1 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.686 09:39:50 
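The secrets themselves use the standard DHHC-1 textual form, DHHC-1:<t>:<base64>:, where <t> appears to encode whether the secret is used as-is (00) or was transformed with SHA-256/384/512 (01/02/03), and the base64 payload carries the key material with a CRC-32 appended. Such secrets are conventionally produced with nvme-cli; a hedged example, with flag spellings assumed from nvme-cli's gen-dhchap-key interface and the output shape purely illustrative:

# Generate a 32-byte, untransformed DH-HMAC-CHAP secret (hedged: nvme-cli
# gen-dhchap-key flags assumed; 0 = no transform, 3 = SHA-512).
nvme gen-dhchap-key --hmac=0 --key-length=32 --nqn nqn.2024-02.io.spdk:host0
# Prints something shaped like the keys in this trace, e.g.:
# DHHC-1:00:NTQwYjY4MDg1MjNlYTgxZDg3ZDk4MDVmYzhiZWYxYzVqscZA: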
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.686 09:39:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.254 nvme0n1 00:17:05.254 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:05.512 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.513 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 nvme0n1 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OThlZjk3YzIyNWQwMWNiY2EyYjhhYjY0NzU4MTRlZTQwZTdjN2Q0YjYxYjU2MWU2O8gxdA==: 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: ]] 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDcwNDJjNDJkNmZiYzU5NTRlMDFkZDU5M2MxNDFhZWJ0vynT: 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.080 09:39:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.080 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.017 nvme0n1 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzYzY2RhMGQ3MmM1Y2I0YWU2NGZlNmNiMDcwZmIxZjgyNTRiYjUwMDY2NmFjODczNDRiNDlhZjBmNWJkMTg3YjycRQc=: 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:17:07.017 09:39:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.017 09:39:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 nvme0n1 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
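That completes the positive sha512 matrix: every key id authenticated under both ffdhe6144 and ffdhe8192. Reconstructed from its own trace lines, the sweep at host/auth.sh@101-104 has roughly this shape (the contents of the dhgroups and keys arrays are only partially visible here, so treat the bounds as assumed):

# Approximate shape of the sweep driven by host/auth.sh@101-104, rebuilt
# from the traced loop headers; array contents are assumptions.
for dhgroup in "${dhgroups[@]}"; do
  for keyid in "${!keys[@]}"; do
    nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # reprogram the target
    connect_authenticate "$digest" "$dhgroup" "$keyid"  # attach, verify, detach
  done
done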
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 request: 00:17:07.585 { 00:17:07.585 "name": "nvme0", 00:17:07.585 "trtype": "tcp", 00:17:07.585 "traddr": "10.0.0.1", 00:17:07.585 "adrfam": "ipv4", 00:17:07.585 "trsvcid": "4420", 00:17:07.585 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:07.585 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:07.585 "prchk_reftag": false, 00:17:07.585 "prchk_guard": false, 00:17:07.585 "hdgst": false, 00:17:07.585 "ddgst": false, 00:17:07.585 "allow_unrecognized_csi": false, 00:17:07.585 "method": "bdev_nvme_attach_controller", 00:17:07.585 "req_id": 1 00:17:07.585 } 00:17:07.585 Got JSON-RPC error response 00:17:07.585 response: 00:17:07.585 { 00:17:07.585 "code": -5, 00:17:07.585 "message": "Input/output error" 00:17:07.585 } 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:17:07.585 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
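From here the test flips to expected failures: the target still demands sha256/ffdhe2048 authentication for key 1, and the host deliberately attaches without credentials, so the attach RPC must fail with code -5, "Input/output error". The NOT wrapper traced above (valid_exec_arg plus the es bookkeeping) inverts the exit status; a minimal stand-in, ignoring the harness's special handling of signal exits:

# Minimal stand-in for the harness's NOT helper: pass only when the wrapped
# command fails (the real helper also special-cases exit codes above 128).
NOT() {
  if "$@"; then return 1; fi   # command unexpectedly succeeded
  return 0                     # command failed, as the test requires
}
# An attach with no --dhchap-key must be rejected by the auth-required
# subsystem; the target drops the connection and the RPC reports -5.
NOT rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 \
    -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0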
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.844 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.844 request: 00:17:07.844 { 00:17:07.844 "name": "nvme0", 00:17:07.844 "trtype": "tcp", 00:17:07.844 "traddr": "10.0.0.1", 00:17:07.844 "adrfam": "ipv4", 00:17:07.844 "trsvcid": "4420", 00:17:07.844 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:07.844 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:07.844 "prchk_reftag": false, 00:17:07.844 "prchk_guard": false, 00:17:07.845 "hdgst": false, 00:17:07.845 "ddgst": false, 00:17:07.845 "dhchap_key": "key2", 00:17:07.845 "allow_unrecognized_csi": false, 00:17:07.845 "method": "bdev_nvme_attach_controller", 00:17:07.845 "req_id": 1 00:17:07.845 } 00:17:07.845 Got JSON-RPC error response 00:17:07.845 response: 00:17:07.845 { 00:17:07.845 "code": -5, 00:17:07.845 "message": "Input/output error" 00:17:07.845 } 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:07.845 09:39:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:07.845 request: 00:17:07.845 { 00:17:07.845 "name": "nvme0", 00:17:07.845 "trtype": "tcp", 00:17:07.845 "traddr": "10.0.0.1", 00:17:07.845 "adrfam": "ipv4", 00:17:07.845 "trsvcid": "4420", 
00:17:07.845 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:17:07.845 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:17:07.845 "prchk_reftag": false, 00:17:07.845 "prchk_guard": false, 00:17:07.845 "hdgst": false, 00:17:07.845 "ddgst": false, 00:17:07.845 "dhchap_key": "key1", 00:17:07.845 "dhchap_ctrlr_key": "ckey2", 00:17:07.845 "allow_unrecognized_csi": false, 00:17:07.845 "method": "bdev_nvme_attach_controller", 00:17:07.845 "req_id": 1 00:17:07.845 } 00:17:07.845 Got JSON-RPC error response 00:17:07.845 response: 00:17:07.845 { 00:17:07.845 "code": -5, 00:17:07.845 "message": "Input/output error" 00:17:07.845 } 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.845 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.105 nvme0n1 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.105 request: 00:17:08.105 { 00:17:08.105 "name": "nvme0", 00:17:08.105 "dhchap_key": "key1", 00:17:08.105 "dhchap_ctrlr_key": "ckey2", 00:17:08.105 "method": "bdev_nvme_set_keys", 00:17:08.105 "req_id": 1 00:17:08.105 } 00:17:08.105 Got JSON-RPC error response 00:17:08.105 response: 00:17:08.105 
{ 00:17:08.105 "code": -13, 00:17:08.105 "message": "Permission denied" 00:17:08.105 } 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:08.105 09:39:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.105 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:17:08.105 09:39:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk4ZTQyYzE1ZmQwOTZlNDQ3NTg2ZmQwZGNiNmFmMmNhM2NjMTBkZGYzMWY2Zjk4+6XXvw==: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTg5MTI3NzNiNjI0NTFjNGFiNjIxMDU3MzA1YTEyZWZlMWVkNjE1YTAwYjlkNDM53qnxog==: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.482 nvme0n1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjBjYjljYmIyMGQyOTA4YzIyY2M2ZGRkNzUzMTE3NDF4iIF5: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: ]] 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NjQzM2UyNDhmZTIxYzZmZjM1MTRhZWE0ZDY0NGVjYTjO5ujP: 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.482 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.483 request: 00:17:09.483 { 00:17:09.483 "name": "nvme0", 00:17:09.483 "dhchap_key": "key2", 00:17:09.483 "dhchap_ctrlr_key": "ckey1", 00:17:09.483 "method": "bdev_nvme_set_keys", 00:17:09.483 "req_id": 1 00:17:09.483 } 00:17:09.483 Got JSON-RPC error response 00:17:09.483 response: 00:17:09.483 { 00:17:09.483 "code": -13, 00:17:09.483 "message": "Permission denied" 00:17:09.483 } 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:17:09.483 09:39:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:10.427 rmmod nvme_tcp 00:17:10.427 rmmod nvme_fabrics 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 78073 ']' 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 78073 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 78073 ']' 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 78073 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:10.427 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 78073 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:10.714 killing process with pid 78073 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 78073' 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 78073 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 78073 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:10.714 09:39:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:10.714 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.973 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:17:10.974 09:39:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:11.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:11.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
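The cleanup traced above removes the kernel NVMe-oF target by unwinding its configfs tree in reverse order of creation, then unloads the nvmet modules before scripts/setup.sh rebinds the PCI devices below. A minimal standalone sketch of that clean_kernel_target sequence, assuming the same cnode0/host0 layout; the explicit namespaces/1/enable path is an assumption, since the trace only shows a bare "echo 0":

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # detach the host from the subsystem, then drop the host entry itself
  rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo 0 > "$subsys/namespaces/1/enable"   # assumed target of the traced 'echo 0'
  # unlink the subsystem from port 1 before removing either side
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1" /sys/kernel/config/nvmet/ports/1 "$subsys"
  modprobe -r nvmet_tcp nvmet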
00:17:11.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:17:11.818 09:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.v07 /tmp/spdk.key-null.fTk /tmp/spdk.key-sha256.jsM /tmp/spdk.key-sha384.Cpu /tmp/spdk.key-sha512.W0l /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log
00:17:11.818 09:39:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:17:12.077 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:17:12.077 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:12.077 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver
00:17:12.336 
00:17:12.336 real 0m38.374s
00:17:12.336 user 0m34.394s
00:17:12.336 sys 0m3.804s
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:17:12.336 ************************************
00:17:12.336 END TEST nvmf_auth_host
00:17:12.336 ************************************
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]]
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:17:12.336 ************************************
00:17:12.336 START TEST nvmf_digest
00:17:12.336 ************************************
00:17:12.336 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp
00:17:12.336 * Looking for test storage...
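The trace that follows steps through the lcov version gate in scripts/common.sh: lt 1.15 2 expands to cmp_versions 1.15 '<' 2, which splits both version strings on IFS=.-: and compares them component by component. A simplified bash paraphrase of just the '<' path exercised here (the real helper also handles other operators and the case/decimal plumbing visible in the trace):

  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-:
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      local v
      for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
          ((ver1[v] > ver2[v])) && return 1   # e.g. 1.15 vs 2: decided on the first component
          ((ver1[v] < ver2[v])) && return 0   # exit status 0 == "less than" holds
      done
      return 1   # equal versions are not strictly less
  }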
00:17:12.336 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:12.336 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:12.336 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:17:12.336 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.595 --rc genhtml_branch_coverage=1 00:17:12.595 --rc genhtml_function_coverage=1 00:17:12.595 --rc genhtml_legend=1 00:17:12.595 --rc geninfo_all_blocks=1 00:17:12.595 --rc geninfo_unexecuted_blocks=1 00:17:12.595 00:17:12.595 ' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.595 --rc genhtml_branch_coverage=1 00:17:12.595 --rc genhtml_function_coverage=1 00:17:12.595 --rc genhtml_legend=1 00:17:12.595 --rc geninfo_all_blocks=1 00:17:12.595 --rc geninfo_unexecuted_blocks=1 00:17:12.595 00:17:12.595 ' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.595 --rc genhtml_branch_coverage=1 00:17:12.595 --rc genhtml_function_coverage=1 00:17:12.595 --rc genhtml_legend=1 00:17:12.595 --rc geninfo_all_blocks=1 00:17:12.595 --rc geninfo_unexecuted_blocks=1 00:17:12.595 00:17:12.595 ' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:12.595 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.595 --rc genhtml_branch_coverage=1 00:17:12.595 --rc genhtml_function_coverage=1 00:17:12.595 --rc genhtml_legend=1 00:17:12.595 --rc geninfo_all_blocks=1 00:17:12.595 --rc geninfo_unexecuted_blocks=1 00:17:12.595 00:17:12.595 ' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.595 09:39:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:12.595 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:17:12.595 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:12.596 Cannot find device "nvmf_init_br" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:12.596 Cannot find device "nvmf_init_br2" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:12.596 Cannot find device "nvmf_tgt_br" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:17:12.596 Cannot find device "nvmf_tgt_br2" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:12.596 Cannot find device "nvmf_init_br" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:12.596 Cannot find device "nvmf_init_br2" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:12.596 Cannot find device "nvmf_tgt_br" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:12.596 Cannot find device "nvmf_tgt_br2" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:12.596 Cannot find device "nvmf_br" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:12.596 Cannot find device "nvmf_init_if" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:12.596 Cannot find device "nvmf_init_if2" 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:12.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:12.596 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:12.596 09:39:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2
00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
00:17:12.596 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3
00:17:12.855 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:12.855 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms
00:17:12.855 
00:17:12.855 --- 10.0.0.3 ping statistics ---
00:17:12.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:12.855 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4
00:17:12.855 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data.
00:17:12.855 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.086 ms
00:17:12.855 
00:17:12.855 --- 10.0.0.4 ping statistics ---
00:17:12.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:12.855 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:17:12.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:12.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms
00:17:12.855 
00:17:12.855 --- 10.0.0.1 ping statistics ---
00:17:12.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:12.855 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2
00:17:12.855 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:12.855 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms
00:17:12.855 
00:17:12.855 --- 10.0.0.2 ping statistics ---
00:17:12.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:12.855 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]]
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:17:12.855 ************************************
00:17:12.855 START TEST nvmf_digest_clean
00:17:12.855 ************************************
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator
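With all four addresses answering, the veth scaffolding traced by nvmf_veth_init above is confirmed working: two veth pairs bridged together, with the target ends moved into the nvmf_tgt_ns_spdk namespace. A condensed reconstruction of the one initiator/target pair just verified, using only commands that appear in the trace (the full helper wires nvmf_init_if2/nvmf_tgt_if2 and the 10.0.0.2/10.0.0.4 addresses the same way):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk       # target end lives in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_tgt_br up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br              # bridge the two host-side ends
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                   # initiator -> target, as above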
00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=79724 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 79724 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79724 ']' 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:12.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:12.855 09:39:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:12.855 [2024-11-05 09:39:58.799470] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:17:12.855 [2024-11-05 09:39:58.799561] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.113 [2024-11-05 09:39:58.952711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.113 [2024-11-05 09:39:58.995077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.113 [2024-11-05 09:39:58.995159] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.113 [2024-11-05 09:39:58.995173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.113 [2024-11-05 09:39:58.995183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.113 [2024-11-05 09:39:58.995192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
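The target traced above was launched inside the nvmf_tgt_ns_spdk network namespace with --wait-for-rpc, so it idles before subsystem init until an RPC client tells it to proceed, and the harness blocks on its UNIX-domain socket. A minimal bash sketch of that launch-and-wait pattern (the real waitforlisten helper is more elaborate; this polling loop is an approximation):

    # start the target in the namespace, paused until framework_start_init arrives
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    # poll the RPC socket until the app answers, bailing out if it died
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1
        sleep 0.1
    done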
00:17:13.113 [2024-11-05 09:39:58.995610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.372 [2024-11-05 09:39:59.205714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:13.372 null0 00:17:13.372 [2024-11-05 09:39:59.243602] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:13.372 [2024-11-05 09:39:59.267708] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79743 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79743 /var/tmp/bperf.sock 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79743 ']' 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:13.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:13.372 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:13.631 [2024-11-05 09:39:59.336684] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:17:13.631 [2024-11-05 09:39:59.336825] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79743 ] 00:17:13.631 [2024-11-05 09:39:59.490219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.631 [2024-11-05 09:39:59.529097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.631 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:13.631 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:13.631 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:13.631 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:13.631 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:13.899 [2024-11-05 09:39:59.846508] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:14.157 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.157 09:39:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.416 nvme0n1 00:17:14.416 09:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:14.416 09:40:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:14.674 Running I/O for 2 seconds... 
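This block is the core of every clean-digest pass: bdevperf starts with --wait-for-rpc on its own socket, framework init is triggered over RPC, a controller is attached with the NVMe/TCP data digest enabled, and bdevperf.py drives the two-second workload. Condensed into the exact commands the xtrace shows:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    $RPC framework_start_init
    # --ddgst makes the initiator compute and verify the crc32c data digest
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests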
00:17:16.545 14351.00 IOPS, 56.06 MiB/s [2024-11-05T09:40:02.503Z] 14478.00 IOPS, 56.55 MiB/s 00:17:16.545 Latency(us) 00:17:16.545 [2024-11-05T09:40:02.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.545 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:16.545 nvme0n1 : 2.01 14466.56 56.51 0.00 0.00 8841.51 8043.05 24069.59 00:17:16.545 [2024-11-05T09:40:02.503Z] =================================================================================================================== 00:17:16.545 [2024-11-05T09:40:02.503Z] Total : 14466.56 56.51 0.00 0.00 8841.51 8043.05 24069.59 00:17:16.545 { 00:17:16.545 "results": [ 00:17:16.545 { 00:17:16.545 "job": "nvme0n1", 00:17:16.545 "core_mask": "0x2", 00:17:16.545 "workload": "randread", 00:17:16.545 "status": "finished", 00:17:16.545 "queue_depth": 128, 00:17:16.545 "io_size": 4096, 00:17:16.545 "runtime": 2.01043, 00:17:16.545 "iops": 14466.556905736583, 00:17:16.545 "mibps": 56.50998791303353, 00:17:16.545 "io_failed": 0, 00:17:16.545 "io_timeout": 0, 00:17:16.545 "avg_latency_us": 8841.50849826834, 00:17:16.545 "min_latency_us": 8043.054545454545, 00:17:16.545 "max_latency_us": 24069.585454545453 00:17:16.545 } 00:17:16.545 ], 00:17:16.545 "core_count": 1 00:17:16.545 } 00:17:16.545 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:16.545 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:16.545 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:16.545 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:16.545 | select(.opcode=="crc32c") 00:17:16.545 | "\(.module_name) \(.executed)"' 00:17:16.545 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79743 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79743 ']' 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79743 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79743 00:17:16.805 killing process with pid 79743 00:17:16.805 Received shutdown signal, test time was about 2.000000 seconds 00:17:16.805 00:17:16.805 Latency(us) 00:17:16.805 [2024-11-05T09:40:02.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:16.805 [2024-11-05T09:40:02.763Z] =================================================================================================================== 00:17:16.805 [2024-11-05T09:40:02.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79743' 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79743 00:17:16.805 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79743 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79796 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79796 /var/tmp/bperf.sock 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79796 ']' 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:17.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:17.064 09:40:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:17.064 [2024-11-05 09:40:02.938814] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:17:17.064 [2024-11-05 09:40:02.938977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79796 ] 00:17:17.064 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:17.064 Zero copy mechanism will not be used. 00:17:17.322 [2024-11-05 09:40:03.087587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.322 [2024-11-05 09:40:03.119590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.322 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:17.322 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:17.322 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:17.322 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:17.322 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:17.581 [2024-11-05 09:40:03.509057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:17.840 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:17.840 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:18.099 nvme0n1 00:17:18.099 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:18.099 09:40:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:18.099 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:18.099 Zero copy mechanism will not be used. 00:17:18.099 Running I/O for 2 seconds... 
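The "zero copy threshold" notices around these 128 KiB runs record that the 131072-byte I/O size exceeds the 65536-byte threshold, so the zero-copy path is skipped, exactly as the message says. The MiB/s column, meanwhile, is just IOPS scaled by the I/O size; a quick sanity check of the figures reported below (bc assumed available):

    # 7205.83 IOPS * 131072 bytes per I/O, converted to MiB/s
    echo '7205.83 * 131072 / 1048576' | bc -l    # -> 900.72875, matching 900.73 MiB/s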
00:17:20.090 7408.00 IOPS, 926.00 MiB/s [2024-11-05T09:40:06.048Z] 7208.00 IOPS, 901.00 MiB/s 00:17:20.090 Latency(us) 00:17:20.090 [2024-11-05T09:40:06.048Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.090 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:20.090 nvme0n1 : 2.00 7205.83 900.73 0.00 0.00 2216.57 1980.97 4944.99 00:17:20.090 [2024-11-05T09:40:06.048Z] =================================================================================================================== 00:17:20.090 [2024-11-05T09:40:06.048Z] Total : 7205.83 900.73 0.00 0.00 2216.57 1980.97 4944.99 00:17:20.090 { 00:17:20.090 "results": [ 00:17:20.090 { 00:17:20.090 "job": "nvme0n1", 00:17:20.090 "core_mask": "0x2", 00:17:20.090 "workload": "randread", 00:17:20.090 "status": "finished", 00:17:20.090 "queue_depth": 16, 00:17:20.090 "io_size": 131072, 00:17:20.090 "runtime": 2.002823, 00:17:20.090 "iops": 7205.828972405449, 00:17:20.090 "mibps": 900.7286215506812, 00:17:20.090 "io_failed": 0, 00:17:20.090 "io_timeout": 0, 00:17:20.090 "avg_latency_us": 2216.5667889538395, 00:17:20.090 "min_latency_us": 1980.9745454545455, 00:17:20.090 "max_latency_us": 4944.989090909091 00:17:20.090 } 00:17:20.090 ], 00:17:20.090 "core_count": 1 00:17:20.090 } 00:17:20.090 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:20.090 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:20.090 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:20.090 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:20.090 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:20.090 | select(.opcode=="crc32c") 00:17:20.090 | "\(.module_name) \(.executed)"' 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79796 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79796 ']' 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79796 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79796 00:17:20.658 killing process with pid 79796 00:17:20.658 Received shutdown signal, test time was about 2.000000 seconds 00:17:20.658 00:17:20.658 Latency(us) 00:17:20.658 [2024-11-05T09:40:06.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:20.658 [2024-11-05T09:40:06.616Z] =================================================================================================================== 00:17:20.658 [2024-11-05T09:40:06.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79796' 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79796 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79796 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79843 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79843 /var/tmp/bperf.sock 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79843 ']' 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:20.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:20.658 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:20.658 [2024-11-05 09:40:06.578632] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:17:20.658 [2024-11-05 09:40:06.578744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79843 ] 00:17:20.917 [2024-11-05 09:40:06.728610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.917 [2024-11-05 09:40:06.761515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:20.917 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:20.917 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:20.917 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:20.917 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:20.917 09:40:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:21.175 [2024-11-05 09:40:07.105150] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:21.434 09:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.434 09:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:21.693 nvme0n1 00:17:21.693 09:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:21.693 09:40:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:21.952 Running I/O for 2 seconds... 
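After each workload the script reads accel statistics from bperf and confirms crc32c was actually executed by the expected module (software here, since DSA is disabled for this pass). The check reduces to the RPC-plus-jq pipeline seen in the trace; the inline OK test is an illustrative condensation of the script's read of acc_module/acc_executed, not its literal code:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"' \
        | { read -r acc_module acc_executed
            [[ $acc_module == software ]] && (( acc_executed > 0 )) && echo OK; }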
00:17:23.822 15114.00 IOPS, 59.04 MiB/s [2024-11-05T09:40:09.780Z] 15240.50 IOPS, 59.53 MiB/s 00:17:23.822 Latency(us) 00:17:23.822 [2024-11-05T09:40:09.780Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:23.822 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:23.822 nvme0n1 : 2.01 15265.71 59.63 0.00 0.00 8378.22 7626.01 16920.20 00:17:23.822 [2024-11-05T09:40:09.780Z] =================================================================================================================== 00:17:23.822 [2024-11-05T09:40:09.780Z] Total : 15265.71 59.63 0.00 0.00 8378.22 7626.01 16920.20 00:17:23.822 { 00:17:23.822 "results": [ 00:17:23.822 { 00:17:23.822 "job": "nvme0n1", 00:17:23.822 "core_mask": "0x2", 00:17:23.822 "workload": "randwrite", 00:17:23.822 "status": "finished", 00:17:23.822 "queue_depth": 128, 00:17:23.822 "io_size": 4096, 00:17:23.822 "runtime": 2.005082, 00:17:23.822 "iops": 15265.70983131862, 00:17:23.822 "mibps": 59.63167902858836, 00:17:23.822 "io_failed": 0, 00:17:23.822 "io_timeout": 0, 00:17:23.822 "avg_latency_us": 8378.21563734968, 00:17:23.822 "min_latency_us": 7626.007272727273, 00:17:23.822 "max_latency_us": 16920.203636363636 00:17:23.822 } 00:17:23.822 ], 00:17:23.822 "core_count": 1 00:17:23.822 } 00:17:23.822 09:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:23.822 09:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:23.822 09:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:23.822 09:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:23.822 | select(.opcode=="crc32c") 00:17:23.822 | "\(.module_name) \(.executed)"' 00:17:23.822 09:40:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79843 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79843 ']' 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79843 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79843 00:17:24.389 killing process with pid 79843 00:17:24.389 Received shutdown signal, test time was about 2.000000 seconds 00:17:24.389 00:17:24.389 Latency(us) 00:17:24.389 [2024-11-05T09:40:10.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:24.389 [2024-11-05T09:40:10.347Z] =================================================================================================================== 00:17:24.389 [2024-11-05T09:40:10.347Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79843' 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79843 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79843 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=79898 00:17:24.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 79898 /var/tmp/bperf.sock 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 79898 ']' 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:24.389 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:24.389 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:24.389 Zero copy mechanism will not be used. 00:17:24.390 [2024-11-05 09:40:10.254304] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:17:24.390 [2024-11-05 09:40:10.254389] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79898 ] 00:17:24.648 [2024-11-05 09:40:10.400379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.648 [2024-11-05 09:40:10.432901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:24.648 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:24.648 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:17:24.648 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:17:24.648 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:17:24.648 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:17:25.214 [2024-11-05 09:40:10.880084] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:25.214 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.214 09:40:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:25.472 nvme0n1 00:17:25.472 09:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:17:25.472 09:40:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:25.729 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:25.729 Zero copy mechanism will not be used. 00:17:25.729 Running I/O for 2 seconds... 
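Each finished bperf instance is torn down through the killprocess helper, whose visible steps here are: confirm the PID is still alive and what it is, signal it (which produces the "Received shutdown signal" table), and reap it. A skeleton consistent with the xtrace, omitting the helper's reactor/sudo safety checks:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        ps --no-headers -o comm= "$pid" >/dev/null || return 1   # still running?
        kill "$pid"
        wait "$pid"
    }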
00:17:27.598 5686.00 IOPS, 710.75 MiB/s [2024-11-05T09:40:13.556Z] 5831.50 IOPS, 728.94 MiB/s 00:17:27.598 Latency(us) 00:17:27.598 [2024-11-05T09:40:13.556Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.598 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:27.598 nvme0n1 : 2.00 5830.66 728.83 0.00 0.00 2737.49 1660.74 10664.49 00:17:27.598 [2024-11-05T09:40:13.556Z] =================================================================================================================== 00:17:27.598 [2024-11-05T09:40:13.556Z] Total : 5830.66 728.83 0.00 0.00 2737.49 1660.74 10664.49 00:17:27.598 { 00:17:27.598 "results": [ 00:17:27.598 { 00:17:27.598 "job": "nvme0n1", 00:17:27.598 "core_mask": "0x2", 00:17:27.598 "workload": "randwrite", 00:17:27.598 "status": "finished", 00:17:27.598 "queue_depth": 16, 00:17:27.598 "io_size": 131072, 00:17:27.598 "runtime": 2.004234, 00:17:27.598 "iops": 5830.6565001890995, 00:17:27.598 "mibps": 728.8320625236374, 00:17:27.598 "io_failed": 0, 00:17:27.598 "io_timeout": 0, 00:17:27.598 "avg_latency_us": 2737.4899344981563, 00:17:27.598 "min_latency_us": 1660.7418181818182, 00:17:27.598 "max_latency_us": 10664.494545454545 00:17:27.598 } 00:17:27.598 ], 00:17:27.598 "core_count": 1 00:17:27.598 } 00:17:27.598 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:17:27.598 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:17:27.598 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:27.598 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:27.598 | select(.opcode=="crc32c") 00:17:27.598 | "\(.module_name) \(.executed)"' 00:17:27.598 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 79898 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79898 ']' 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79898 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:27.857 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79898 00:17:28.115 killing process with pid 79898 00:17:28.115 Received shutdown signal, test time was about 2.000000 seconds 00:17:28.115 00:17:28.115 Latency(us) 00:17:28.115 [2024-11-05T09:40:14.073Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:28.115 [2024-11-05T09:40:14.073Z] =================================================================================================================== 00:17:28.115 [2024-11-05T09:40:14.073Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79898' 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79898 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79898 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 79724 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 79724 ']' 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 79724 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:17:28.115 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:28.116 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79724 00:17:28.116 killing process with pid 79724 00:17:28.116 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:28.116 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:28.116 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79724' 00:17:28.116 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 79724 00:17:28.116 09:40:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 79724 00:17:28.374 00:17:28.374 real 0m15.379s 00:17:28.374 user 0m30.589s 00:17:28.374 sys 0m4.308s 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 ************************************ 00:17:28.374 END TEST nvmf_digest_clean 00:17:28.374 ************************************ 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 ************************************ 00:17:28.374 START TEST nvmf_digest_error 00:17:28.374 ************************************ 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # run_digest_error 00:17:28.374 09:40:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.374 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=79978 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 79978 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79978 ']' 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:28.375 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.375 [2024-11-05 09:40:14.226140] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:17:28.375 [2024-11-05 09:40:14.226235] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:28.633 [2024-11-05 09:40:14.376306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.633 [2024-11-05 09:40:14.407168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:28.633 [2024-11-05 09:40:14.407230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:28.633 [2024-11-05 09:40:14.407248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:28.633 [2024-11-05 09:40:14.407261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:28.633 [2024-11-05 09:40:14.407272] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
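The nvmf_digest_error test starting here differs from the clean pass in one setup step: on the target it routes the crc32c opcode through the accel "error" module and then arms corruption, so digests sent on the wire are wrong and the initiator must detect them. The RPC sequence, assembled from the rpc_cmd calls traced below (rpc.py defaults to the target's /var/tmp/spdk.sock):

    RPC_TGT="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    $RPC_TGT accel_assign_opc -o crc32c -m error                    # route crc32c through the error module
    $RPC_TGT accel_error_inject_error -o crc32c -t disable          # injection initially off
    $RPC_TGT accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt 256 crc32c operations

Every corrupted digest then surfaces in bperf as the paired "data digest error on tqpair" and "COMMAND TRANSIENT TRANSPORT ERROR" records that fill the remainder of this section.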
00:17:28.633 [2024-11-05 09:40:14.407643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.633 [2024-11-05 09:40:14.488203] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.633 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.634 [2024-11-05 09:40:14.524572] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:28.634 null0 00:17:28.634 [2024-11-05 09:40:14.559484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:28.634 [2024-11-05 09:40:14.583613] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=79997 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 79997 /var/tmp/bperf.sock 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:28.634 09:40:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 79997 ']' 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:28.634 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:28.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:28.892 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:28.892 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:28.892 [2024-11-05 09:40:14.647823] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:17:28.892 [2024-11-05 09:40:14.648149] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79997 ] 00:17:28.892 [2024-11-05 09:40:14.797821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.892 [2024-11-05 09:40:14.831508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.150 [2024-11-05 09:40:14.862458] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:29.150 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:29.150 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:29.150 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:29.150 09:40:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:29.409 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:29.409 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.409 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:29.409 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.409 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.409 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:29.668 nvme0n1 00:17:29.926 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:29.926 09:40:15 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.926 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:29.927 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.927 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:29.927 09:40:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:29.927 Running I/O for 2 seconds... 00:17:29.927 [2024-11-05 09:40:15.828449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:29.927 [2024-11-05 09:40:15.828706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.927 [2024-11-05 09:40:15.828727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.927 [2024-11-05 09:40:15.846713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:29.927 [2024-11-05 09:40:15.846926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.927 [2024-11-05 09:40:15.846948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.927 [2024-11-05 09:40:15.865016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:29.927 [2024-11-05 09:40:15.865062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.927 [2024-11-05 09:40:15.865078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:29.927 [2024-11-05 09:40:15.882908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:29.927 [2024-11-05 09:40:15.882978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23231 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:29.927 [2024-11-05 09:40:15.882995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:15.900816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:15.901035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:15.901056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:15.918995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:15.919040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10186 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:15.919055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:15.937865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:15.937924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:15.937941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:15.956678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:15.956860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:15.956881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:15.974770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:15.974817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:15.974832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:15.993368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:15.993414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14055 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:15.993429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.011573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.011618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.011633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.030543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.030587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:5212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.030602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.048792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.048885] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.048904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.067002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.067045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.067059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.084960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.085014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:9048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.085029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.104245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.104290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:16630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.104342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.124004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.124253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18052 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.124272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.186 [2024-11-05 09:40:16.142929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.186 [2024-11-05 09:40:16.142972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.186 [2024-11-05 09:40:16.143004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.162092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.162317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.162336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.181361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.181437] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.181452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.199462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.199509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.199524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.217473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.217659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.217678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.235585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.235632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.235648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.253534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.253581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.253597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.271438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.271483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.271498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.289563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.289611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.289642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.307809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.308035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.308054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.326352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.326395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.326424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.344883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.344947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.344973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.364026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.364074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.364090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.382203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.382251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.382266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.446 [2024-11-05 09:40:16.400125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.446 [2024-11-05 09:40:16.400188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:10517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.446 [2024-11-05 09:40:16.400204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.705 [2024-11-05 09:40:16.418025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.705 [2024-11-05 09:40:16.418080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.705 [2024-11-05 09:40:16.418096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.705 [2024-11-05 09:40:16.435880] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.705 [2024-11-05 09:40:16.435929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.705 [2024-11-05 09:40:16.435943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.705 [2024-11-05 09:40:16.453627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.705 [2024-11-05 09:40:16.453676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.453691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.471502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.471697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.471717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.489475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.489520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.489535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.507208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.507271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.507287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.525053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.525226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:22634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.525245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.542981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.543023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.543037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:30.706 [2024-11-05 09:40:16.560696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.560877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.560896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.578615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.578659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.578674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.596371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.596535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:10929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.596553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.614258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.614300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.614314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.632005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.632045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.632059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.706 [2024-11-05 09:40:16.650008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.706 [2024-11-05 09:40:16.650047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:1744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.706 [2024-11-05 09:40:16.650061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.965 [2024-11-05 09:40:16.667890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.965 [2024-11-05 09:40:16.667935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.965 [2024-11-05 09:40:16.667949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.965 [2024-11-05 09:40:16.686170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.965 [2024-11-05 09:40:16.686227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.965 [2024-11-05 09:40:16.686243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.965 [2024-11-05 09:40:16.704378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.965 [2024-11-05 09:40:16.704434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.965 [2024-11-05 09:40:16.704450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.965 [2024-11-05 09:40:16.723146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.965 [2024-11-05 09:40:16.723351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.723372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.741194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.741365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.741384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.759917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.759961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.759976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.778568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.778615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.778645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.797096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.797143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.797158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 13789.00 IOPS, 53.86 MiB/s [2024-11-05T09:40:16.924Z] [2024-11-05 09:40:16.815542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.815589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.815604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.833211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.833382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.833401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.851570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.851866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:14763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.851888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.869898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.870164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.870186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.888431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.888505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.888521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:30.966 [2024-11-05 09:40:16.906623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:30.966 [2024-11-05 09:40:16.906693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:30.966 [2024-11-05 09:40:16.906709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.225 [2024-11-05 09:40:16.924424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.225 [2024-11-05 09:40:16.924508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 
nsid:1 lba:8396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:16.924531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:16.942637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:16.942709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:16.942725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:16.960455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:16.960502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:16.960517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:16.985917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:16.985967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:16.985982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.003671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.003723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.003738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.021401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.021590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.021609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.039392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.039438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.039453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.057162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.057207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.057221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.075009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.075188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.075207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.093024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.093069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:23150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.093083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.110851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.110896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:12650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.110910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.128591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.128636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:20406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.128650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.146327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.146370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.146384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.164083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.164128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.164142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.226 [2024-11-05 09:40:17.181791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x17cd370) 00:17:31.226 [2024-11-05 09:40:17.181847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.226 [2024-11-05 09:40:17.181863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.485 [2024-11-05 09:40:17.199594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.485 [2024-11-05 09:40:17.199639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22124 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.485 [2024-11-05 09:40:17.199653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.485 [2024-11-05 09:40:17.217396] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.485 [2024-11-05 09:40:17.217443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:12775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.485 [2024-11-05 09:40:17.217457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.485 [2024-11-05 09:40:17.235160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.485 [2024-11-05 09:40:17.235225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.485 [2024-11-05 09:40:17.235240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.485 [2024-11-05 09:40:17.253115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.485 [2024-11-05 09:40:17.253159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.485 [2024-11-05 09:40:17.253172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.485 [2024-11-05 09:40:17.270946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.485 [2024-11-05 09:40:17.270993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:8015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.271010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.288653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.288697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.288711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.306448] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.306494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.306509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.324272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.324320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.324335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.342014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.342057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:534 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.342071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.359802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.359863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.359879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.377663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.377710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:11093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.377724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.395441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.395506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:15008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.395529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.486 [2024-11-05 09:40:17.413311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.413358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.413372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:17:31.486 [2024-11-05 09:40:17.431062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.486 [2024-11-05 09:40:17.431113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.486 [2024-11-05 09:40:17.431126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.448917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.448972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.448987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.466708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.466760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.466774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.484501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.484551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:25248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.484566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.502339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.502389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.502404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.520112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.520162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:1340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.520177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.538601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.538661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.538677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.556576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.745 [2024-11-05 09:40:17.556629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.745 [2024-11-05 09:40:17.556646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.745 [2024-11-05 09:40:17.574447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.574494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.574508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.592271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.592319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.592333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.610079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.610126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.610140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.627897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.627944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.627958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.645625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.645674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.645689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.663544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.663595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.663609] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.681455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.681520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.681538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:31.746 [2024-11-05 09:40:17.700412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:31.746 [2024-11-05 09:40:17.700476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:31.746 [2024-11-05 09:40:17.700491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.005 [2024-11-05 09:40:17.718418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:32.005 [2024-11-05 09:40:17.718487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.005 [2024-11-05 09:40:17.718504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.005 [2024-11-05 09:40:17.736364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:32.005 [2024-11-05 09:40:17.736435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.005 [2024-11-05 09:40:17.736449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.005 [2024-11-05 09:40:17.754216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:32.005 [2024-11-05 09:40:17.754266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.005 [2024-11-05 09:40:17.754281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.005 [2024-11-05 09:40:17.772076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:32.005 [2024-11-05 09:40:17.772129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:15168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:32.005 [2024-11-05 09:40:17.772144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:32.006 [2024-11-05 09:40:17.789876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370) 00:17:32.006 [2024-11-05 09:40:17.789928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12259 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:32.006 [2024-11-05 09:40:17.789943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:32.006 13979.00 IOPS, 54.61 MiB/s [2024-11-05T09:40:17.964Z] [2024-11-05 09:40:17.808945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x17cd370)
00:17:32.006 [2024-11-05 09:40:17.809006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:23835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:32.006 [2024-11-05 09:40:17.809021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:32.006
00:17:32.006 Latency(us)
00:17:32.006 [2024-11-05T09:40:17.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:32.006 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:17:32.006 nvme0n1 : 2.01 13969.35 54.57 0.00 0.00 9155.63 8579.26 34317.03
00:17:32.006 [2024-11-05T09:40:17.964Z] ===================================================================================================================
00:17:32.006 [2024-11-05T09:40:17.964Z] Total : 13969.35 54.57 0.00 0.00 9155.63 8579.26 34317.03
00:17:32.006 {
00:17:32.006 "results": [
00:17:32.006 {
00:17:32.006 "job": "nvme0n1",
00:17:32.006 "core_mask": "0x2",
00:17:32.006 "workload": "randread",
00:17:32.006 "status": "finished",
00:17:32.006 "queue_depth": 128,
00:17:32.006 "io_size": 4096,
00:17:32.006 "runtime": 2.010544,
00:17:32.006 "iops": 13969.35356798956,
00:17:32.006 "mibps": 54.56778737495922,
00:17:32.006 "io_failed": 0,
00:17:32.006 "io_timeout": 0,
00:17:32.006 "avg_latency_us": 9155.634559308099,
00:17:32.006 "min_latency_us": 8579.258181818182,
00:17:32.006 "max_latency_us": 34317.03272727273
00:17:32.006 }
00:17:32.006 ],
00:17:32.006 "core_count": 1
00:17:32.006 }
00:17:32.006 09:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:32.006 09:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:32.006 09:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:32.006 09:40:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:17:32.006 | .driver_specific
00:17:32.006 | .nvme_error
00:17:32.006 | .status_code
00:17:32.006 | .command_transient_transport_error'
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 ))
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 79997
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79997 ']'
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79997
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79997
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:17:32.265 killing process with pid 79997
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79997'
00:17:32.265 Received shutdown signal, test time was about 2.000000 seconds
00:17:32.265
00:17:32.265 Latency(us)
[2024-11-05T09:40:18.223Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-05T09:40:18.223Z] ===================================================================================================================
[2024-11-05T09:40:18.223Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79997
00:17:32.265 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79997
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80050
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:17:32.524 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80050 /var/tmp/bperf.sock
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80050 ']'
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:32.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:32.525 [2024-11-05 09:40:18.362821] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:17:32.525 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:32.525 Zero copy mechanism will not be used.
00:17:32.525 [2024-11-05 09:40:18.362957] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80050 ] 00:17:32.783 [2024-11-05 09:40:18.520371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:32.783 [2024-11-05 09:40:18.563231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.783 [2024-11-05 09:40:18.600256] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:32.783 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:32.783 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:17:32.783 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:32.783 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:33.042 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:33.042 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.042 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.042 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.042 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.042 09:40:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:33.611 nvme0n1 00:17:33.611 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:33.611 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.611 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:33.611 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.611 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:33.611 09:40:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:33.611 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:33.611 Zero copy mechanism will not be used. 00:17:33.611 Running I/O for 2 seconds... 
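The RPC sequence just traced is the core of the digest-error test: error counters and unlimited bdev retries are enabled first, crc32c error injection is disabled while the controller attaches with data digest (--ddgst) on, injection is then switched to corrupt crc32c results at interval 32, and only then does perform_tests start the workload. Condensed into the commands as they appear above (rpc_cmd's target socket is not shown in the trace, so the default target-app socket is an assumption):

    bperf_rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    rpc_cmd="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # nvmf target app, default socket (assumed)

    $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $rpc_cmd accel_error_inject_error -o crc32c -t disable
    $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32

    # Start the configured randread job; corrupted crc32c results now make
    # reads fail data digest verification, complete with TRANSIENT
    # TRANSPORT ERROR, and get retried (--bdev-retry-count -1).
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests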
00:17:33.611 [2024-11-05 09:40:19.407550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:33.611 [2024-11-05 09:40:19.407605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.611 [2024-11-05 09:40:19.407622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:17:33.611 [2024-11-05 09:40:19.412139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:33.611 [2024-11-05 09:40:19.412180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:33.611 [2024-11-05 09:40:19.412195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) record triplet repeats for many more LBAs on tqpair=(0x203e400) from 09:40:19.416593 through 09:40:19.962913 ...]
00:17:34.135 [2024-11-05 09:40:19.967359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.967399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.967413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:34.135 [2024-11-05 09:40:19.971807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.971858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.971872]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:19.976238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.976274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.976288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:19.980798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.980850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.980865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:19.985345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.985382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.985395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:19.989881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.989917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.989931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:19.994402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.994439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.994451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:19.998937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:19.998973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:19.998986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.003413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.003449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.003461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.007902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.007938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.007951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.012401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.012438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.012451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.016873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.016908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.016922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.021421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.021458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.021472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.025937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.025972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.025986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.030411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.030449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.030463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.034936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.034972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.034985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.039425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.135 [2024-11-05 09:40:20.039461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.135 [2024-11-05 09:40:20.039474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.135 [2024-11-05 09:40:20.043984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.044023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.044038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.048464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.048500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.048513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.052947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.052989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.053003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.057448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.057484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.057497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.062520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.062563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.062577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.067074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.067117] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.067138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.071587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.071627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.071641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.076085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.076122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.076136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.080661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.080702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.080716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.085200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.085237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.085250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.136 [2024-11-05 09:40:20.089721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.136 [2024-11-05 09:40:20.089758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.136 [2024-11-05 09:40:20.089771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.094267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.094305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.094318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.098722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.098760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.098773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.103114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.103150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.103163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.107616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.107657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.107671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.112160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.112196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.112210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.116638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.116675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.116689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.121232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.121268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.121281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.125734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.125772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.130311] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.130348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.130361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.134828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.134879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.134893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.139318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.139357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.139371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.143763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.143802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.143816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.148353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.148390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.148403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.153014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.153053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.153066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.157511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.157551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.157565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.161995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.162034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.397 [2024-11-05 09:40:20.162048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.397 [2024-11-05 09:40:20.166560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.397 [2024-11-05 09:40:20.166599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.166612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.171068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.171105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.171118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.175557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.175593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.175607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.180094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.180131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.180144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.184647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.184683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.184695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.189390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.189428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.189440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.194043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.194079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.194093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.198634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.198686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.198699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.203351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.203386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.203399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.207958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.208009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.208022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.212563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.212615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.212635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.217208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.217244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.217257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.221711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.221748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.221761] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.227081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.227146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.227168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.231875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.231944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.231959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.236515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.236556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.236570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.241152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.241188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.241202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.245696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.245748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.245760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.251129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.251190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.251205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.255858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.255923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.255937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.260663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.260698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.260710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.265339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.265376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.265389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.270179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.270214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.270228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.275330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.275371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.275385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.279960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.279998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.280012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.284618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.284670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.284684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.289414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.289465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.289510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.398 [2024-11-05 09:40:20.294101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.398 [2024-11-05 09:40:20.294137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.398 [2024-11-05 09:40:20.294150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.298807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.298891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.298906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.303733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.303807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.303822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.308536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.308590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.308619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.313425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.313521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.313534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.318054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.318107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.318121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.322573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.322624] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.322637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.327047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.327081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.327093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.331321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.331371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.331383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.335813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.335875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.335888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.340363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.340414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.340426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.344642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.344691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.344703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.349049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.349087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.349101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.399 [2024-11-05 09:40:20.353739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x203e400) 00:17:34.399 [2024-11-05 09:40:20.353789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.399 [2024-11-05 09:40:20.353801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.358240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.660 [2024-11-05 09:40:20.358305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-11-05 09:40:20.358317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.362864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.660 [2024-11-05 09:40:20.362927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-11-05 09:40:20.362940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.367344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.660 [2024-11-05 09:40:20.367395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-11-05 09:40:20.367408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.371759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.660 [2024-11-05 09:40:20.371810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-11-05 09:40:20.371822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.376318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.660 [2024-11-05 09:40:20.376352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-11-05 09:40:20.376381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.381031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:34.660 [2024-11-05 09:40:20.381072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:34.660 [2024-11-05 09:40:20.381086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:34.660 [2024-11-05 09:40:20.385612] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400)
00:17:34.660 [2024-11-05 09:40:20.385677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:34.660 [2024-11-05 09:40:20.385690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:17:34.660 [2024-11-05 09:40:20.390302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400)
00:17:34.660 [2024-11-05 09:40:20.390350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:34.660 [2024-11-05 09:40:20.390363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:34.660 6743.00 IOPS, 842.88 MiB/s [2024-11-05T09:40:20.618Z]
[... repeated data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) record triplets from 09:40:20.394744 through 09:40:21.004038 elided; all on tqpair=(0x203e400) qid:1 cid:15, lba and sqhd varying ...]
00:17:35.186 [2024-11-05 09:40:21.008437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400)
00:17:35.186 [2024-11-05 09:40:21.008473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1
lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.008487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.012940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.012985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.012999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.017423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.017458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.017471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.021874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.021912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.021926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.026314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.026353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.026366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.030774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.030812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.030826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.035237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.035275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.035288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.039692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.039728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.039742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.044127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.044165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.044178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.048549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.048586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.048599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.053018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.053055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.053068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.057476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.057512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.057525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.062146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.062191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.062205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.066620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.186 [2024-11-05 09:40:21.066660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.066674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.071072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 
00:17:35.186 [2024-11-05 09:40:21.071112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.186 [2024-11-05 09:40:21.071126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.186 [2024-11-05 09:40:21.075499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.075538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.075551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.080023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.080059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.080072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.084454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.084491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.084505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.088982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.089018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.089031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.093434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.093474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.093487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.097922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.097959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.097972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.102394] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.102432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.102446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.106885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.106922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.106936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.111317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.111353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.111366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.115803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.115849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.115864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.120261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.120299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.120312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.124740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.124777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.124790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.129186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.129223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.129237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.133675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.133712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.133725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.138172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.138211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.138225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.187 [2024-11-05 09:40:21.142656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.187 [2024-11-05 09:40:21.142696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.187 [2024-11-05 09:40:21.142709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.147118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.147156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.147169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.151609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.151648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.151661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.156087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.156125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.156138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.160552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.160590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.160604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.165002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.165039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.165052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.169425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.169463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.169477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.173937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.173974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.173987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.178460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.178498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.178512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.182917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.182955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.182969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.187307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.187344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.187357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.191741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.191780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.191794] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.196237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.196275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.196288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.200784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.200823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.447 [2024-11-05 09:40:21.200853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.447 [2024-11-05 09:40:21.205306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.447 [2024-11-05 09:40:21.205345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.205358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.209806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.209859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.209874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.214264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.214302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.214315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.218733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.218770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.218784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.223171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.223208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:35.448 [2024-11-05 09:40:21.223222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.227581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.227620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.227634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.232059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.232095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.232108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.236471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.236509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.236523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.240993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.241029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.241042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.245417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.245454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.245467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.249980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.250016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.250029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.254504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.254543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.254557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.258996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.259041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.259055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.263470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.263508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.263522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.267956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.267993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.268007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.272417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.272455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.272469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.276949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.276994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.277008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.281416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.281454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.281467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.285941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.285978] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.285991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.290409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.290447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.290461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.294899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.294946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.294959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.299399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.299443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.299456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.303825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.303873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.303887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.308332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.308368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.308380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.312821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.312874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.312888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.317375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 
00:17:35.448 [2024-11-05 09:40:21.317411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.317424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.321893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.448 [2024-11-05 09:40:21.321940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.448 [2024-11-05 09:40:21.321953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.448 [2024-11-05 09:40:21.326386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.326437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.326450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.330933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.330979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.330992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.335413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.335449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.335463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.339897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.339932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.339945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.344304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.344340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.344353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.348736] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.348772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.348785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.353244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.353280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.353293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.357733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.357770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.357783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.362214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.362250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.362263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.366659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.366695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.366708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.371114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.371150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.371163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.375595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.375632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.375645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.380107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.380143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.380156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.384564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.384600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.384613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.389041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.389077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.389090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.393455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.393491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.393503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:35.449 [2024-11-05 09:40:21.398059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.398096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.398109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:35.449 6797.00 IOPS, 849.62 MiB/s [2024-11-05T09:40:21.407Z] [2024-11-05 09:40:21.404250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x203e400) 00:17:35.449 [2024-11-05 09:40:21.404287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:35.449 [2024-11-05 09:40:21.404300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:35.708 00:17:35.708 Latency(us) 00:17:35.708 [2024-11-05T09:40:21.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:35.708 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:35.708 nvme0n1 : 2.00 6798.57 849.82 0.00 0.00 2349.67 1936.29 6315.29 00:17:35.708 [2024-11-05T09:40:21.666Z] 
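A quick cross-check on the summary that follows: bdevperf derives the MiB/s column directly from IOPS, and with 131072-byte I/Os (1/8 MiB each) it is simply IOPS divided by 8. A one-liner to verify, using the exact iops value from the JSON results below:

  awk 'BEGIN { printf "%.4f\n", 6798.569730621955 / 8 }'   # prints 849.8212, the "mibps" value reported below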
00:17:35.708 
00:17:35.708 Latency(us)
00:17:35.708 [2024-11-05T09:40:21.666Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:17:35.708 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:17:35.708 nvme0n1 : 2.00    6798.57     849.82       0.00     0.00    2349.67    1936.29    6315.29
00:17:35.708 [2024-11-05T09:40:21.666Z] ===================================================================================================================
00:17:35.708 [2024-11-05T09:40:21.666Z] Total                  :    6798.57     849.82       0.00     0.00    2349.67    1936.29    6315.29
00:17:35.708 {
00:17:35.708   "results": [
00:17:35.708     {
00:17:35.708       "job": "nvme0n1",
00:17:35.708       "core_mask": "0x2",
00:17:35.708       "workload": "randread",
00:17:35.708       "status": "finished",
00:17:35.708       "queue_depth": 16,
00:17:35.708       "io_size": 131072,
00:17:35.708       "runtime": 2.004098,
00:17:35.708       "iops": 6798.569730621955,
00:17:35.708       "mibps": 849.8212163277444,
00:17:35.708       "io_failed": 0,
00:17:35.708       "io_timeout": 0,
00:17:35.708       "avg_latency_us": 2349.6683844537115,
00:17:35.708       "min_latency_us": 1936.290909090909,
00:17:35.708       "max_latency_us": 6315.2872727272725
00:17:35.708     }
00:17:35.708   ],
00:17:35.708   "core_count": 1
00:17:35.708 }
00:17:35.708 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:35.708 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:35.709 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:35.709 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 439 > 0 ))
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80050
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80050 ']'
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80050
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80050
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:17:35.967 killing process with pid 80050
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80050'
00:17:35.967 Received shutdown signal, test time was about 2.000000 seconds
00:17:35.967 
00:17:35.967 Latency(us)
00:17:35.967 [2024-11-05T09:40:21.925Z] Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:17:35.967 [2024-11-05T09:40:21.925Z] ===================================================================================================================
00:17:35.967 [2024-11-05T09:40:21.925Z] Total                  :       0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80050
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80050
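The (( 439 > 0 )) check above is the actual assertion of this test case: because bdevperf was started with bdev_nvme_set_options --nvme-error-stat, the NVMe bdev module keeps per-status-code error counters, and every injected digest failure is tallied under command_transient_transport_error. Note that io_failed in the results JSON stays 0 even though 439 commands errored; --bdev-retry-count -1 makes the bdev layer retry each failed I/O until it succeeds, so the errors are visible only through the error counters. A minimal standalone sketch of the same check, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock (paths as in this run):

  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) && echo "nvme0n1 recorded $errcount transient transport errors"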
00:17:35.967 09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80098
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80098 /var/tmp/bperf.sock
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80098 ']'
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
09:40:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:36.226 [2024-11-05 09:40:21.937040] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:17:36.226 [2024-11-05 09:40:21.937119] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80098 ]
00:17:36.226 [2024-11-05 09:40:22.083490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:36.226 [2024-11-05 09:40:22.116350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:36.226 [2024-11-05 09:40:22.147268] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:17:36.484 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:17:36.484 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:17:36.484 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:36.484 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:36.742 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:36.743 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:36.743 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:37.001 nvme0n1
00:17:37.001 09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
09:40:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:37.001 Running I/O for 2 seconds...
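Condensed, the setup traced above is a five-step RPC sequence against the new bdevperf instance: enable per-status-code NVMe error counting and unlimited bdev retries, disable crc32c corruption so the controller attach itself succeeds, attach the controller with the TCP data digest enabled (--ddgst), re-arm the corruption, then kick off the run. A sketch under the same socket, address and NQN as this run (-i 256 sets the injection interval, read here as roughly one corrupted crc32c result every 256 operations):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }
  rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors per NVMe status code; retry failed I/O indefinitely
  rpc accel_error_inject_error -o crc32c -t disable                   # keep the attach itself free of corruption
  rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc accel_error_inject_error -o crc32c -t corrupt -i 256            # re-arm digest corruption before the I/O starts
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests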
00:17:37.313 [2024-11-05 09:40:22.960724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fef90 00:17:37.313 [2024-11-05 09:40:22.963371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:22.963413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:22.977670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166feb58 00:17:37.313 [2024-11-05 09:40:22.980298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:22.980350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:22.994384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fe2e8 00:17:37.313 [2024-11-05 09:40:22.997010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:22.997045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.011168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fda78 00:17:37.313 [2024-11-05 09:40:23.013740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:10231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.013788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.027978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fd208 00:17:37.313 [2024-11-05 09:40:23.030495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.030545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.044688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fc998 00:17:37.313 [2024-11-05 09:40:23.047280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.047329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.061582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fc128 00:17:37.313 [2024-11-05 09:40:23.064107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.064157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 
00:17:37.313 [2024-11-05 09:40:23.078447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fb8b8 00:17:37.313 [2024-11-05 09:40:23.080950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.080999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.095224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fb048 00:17:37.313 [2024-11-05 09:40:23.097707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:15991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.097757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.111921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166fa7d8 00:17:37.313 [2024-11-05 09:40:23.114389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.114439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.128889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f9f68 00:17:37.313 [2024-11-05 09:40:23.131315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.131350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.145666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f96f8 00:17:37.313 [2024-11-05 09:40:23.148091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.148140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.162526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f8e88 00:17:37.313 [2024-11-05 09:40:23.164934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:25318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.164991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.179380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f8618 00:17:37.313 [2024-11-05 09:40:23.181723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.181756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 
p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.196170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f7da8 00:17:37.313 [2024-11-05 09:40:23.198496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:22324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.198527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.213046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f7538 00:17:37.313 [2024-11-05 09:40:23.215344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.313 [2024-11-05 09:40:23.215375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:37.313 [2024-11-05 09:40:23.229875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f6cc8 00:17:37.581 [2024-11-05 09:40:23.232171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.581 [2024-11-05 09:40:23.232208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:37.581 [2024-11-05 09:40:23.246699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f6458 00:17:37.581 [2024-11-05 09:40:23.248986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:9857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.581 [2024-11-05 09:40:23.249020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:37.581 [2024-11-05 09:40:23.263474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f5be8 00:17:37.582 [2024-11-05 09:40:23.265730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.265761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.280285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f5378 00:17:37.582 [2024-11-05 09:40:23.282518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:2888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.282548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.297071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f4b08 00:17:37.582 [2024-11-05 09:40:23.299271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:17266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.299302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.313824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f4298 00:17:37.582 [2024-11-05 09:40:23.316014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.316045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.330653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f3a28 00:17:37.582 [2024-11-05 09:40:23.332853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.332906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.347453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f31b8 00:17:37.582 [2024-11-05 09:40:23.349605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.349636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.364304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f2948 00:17:37.582 [2024-11-05 09:40:23.366478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.366525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.381320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f20d8 00:17:37.582 [2024-11-05 09:40:23.383439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:14181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.383469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.398170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f1868 00:17:37.582 [2024-11-05 09:40:23.400257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:3662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.400288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.414962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f0ff8 00:17:37.582 [2024-11-05 09:40:23.417027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.417059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.431739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f0788 00:17:37.582 [2024-11-05 09:40:23.433809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.433851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.449204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eff18 00:17:37.582 [2024-11-05 09:40:23.451233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.451273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.466112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ef6a8 00:17:37.582 [2024-11-05 09:40:23.468128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.468166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.483021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eee38 00:17:37.582 [2024-11-05 09:40:23.485017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.485053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.499901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ee5c8 00:17:37.582 [2024-11-05 09:40:23.501869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.501903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.516661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166edd58 00:17:37.582 [2024-11-05 09:40:23.518618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:11710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.518649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:37.582 [2024-11-05 09:40:23.533499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ed4e8 00:17:37.582 [2024-11-05 09:40:23.535423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:6382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.582 [2024-11-05 09:40:23.535453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.550314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ecc78 00:17:37.841 [2024-11-05 09:40:23.552215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:17686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.552246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.567126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ec408 00:17:37.841 [2024-11-05 09:40:23.569023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.569054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.584246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ebb98 00:17:37.841 [2024-11-05 09:40:23.586128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.586168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.601120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eb328 00:17:37.841 [2024-11-05 09:40:23.602963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.602999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.617937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eaab8 00:17:37.841 [2024-11-05 09:40:23.619739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.619770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.634703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ea248 00:17:37.841 [2024-11-05 09:40:23.636553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:1697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.636600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.651555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e99d8 00:17:37.841 [2024-11-05 09:40:23.653409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:18246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.653456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.668390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e9168 00:17:37.841 [2024-11-05 09:40:23.670205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.670267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.685185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e88f8 00:17:37.841 [2024-11-05 09:40:23.686945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.686979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.701883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e8088 00:17:37.841 [2024-11-05 09:40:23.703606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.703655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.718594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e7818 00:17:37.841 [2024-11-05 09:40:23.720344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.720389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.735493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e6fa8 00:17:37.841 [2024-11-05 09:40:23.737206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.737238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.752290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e6738 00:17:37.841 [2024-11-05 09:40:23.753982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:10610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.754017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.769158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e5ec8 00:17:37.841 [2024-11-05 09:40:23.770787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.770820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:37.841 [2024-11-05 09:40:23.785896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e5658 00:17:37.841 [2024-11-05 09:40:23.787537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:37.841 [2024-11-05 09:40:23.787585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.802747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e4de8 00:17:38.101 [2024-11-05 09:40:23.804346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:15392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.804379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.819522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e4578 00:17:38.101 [2024-11-05 09:40:23.821109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:9538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.821141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.837121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e3d08 00:17:38.101 [2024-11-05 09:40:23.838680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.838733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.854115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e3498 00:17:38.101 [2024-11-05 09:40:23.855660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:18667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.855709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.870809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e2c28 00:17:38.101 [2024-11-05 09:40:23.872339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.872388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.887546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e23b8 00:17:38.101 [2024-11-05 09:40:23.889051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.889082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.904373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e1b48 00:17:38.101 [2024-11-05 09:40:23.905895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.905921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.921223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e12d8 00:17:38.101 [2024-11-05 09:40:23.922677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:17880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.922725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.937875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e0a68 00:17:38.101 14929.00 IOPS, 58.32 MiB/s [2024-11-05T09:40:24.059Z] [2024-11-05 09:40:23.939318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:23276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.939350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.954673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e01f8 00:17:38.101 [2024-11-05 09:40:23.956112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:1615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.956147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.971515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166df988 00:17:38.101 [2024-11-05 09:40:23.972928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.972960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:23.988367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166df118 00:17:38.101 [2024-11-05 09:40:23.989767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:12595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:23.989815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:24.005199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166de8a8 00:17:38.101 [2024-11-05 09:40:24.006566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11189 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:38.101 [2024-11-05 09:40:24.006613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:38.101 [2024-11-05 09:40:24.021929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166de038 00:17:38.101 [2024-11-05 09:40:24.023270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.102 [2024-11-05 09:40:24.023318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:38.102 [2024-11-05 09:40:24.045659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166de038 00:17:38.102 [2024-11-05 09:40:24.048292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:4526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.102 [2024-11-05 09:40:24.048338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:38.361 [2024-11-05 09:40:24.062620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166de8a8 00:17:38.361 [2024-11-05 09:40:24.065224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.361 [2024-11-05 09:40:24.065256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:38.361 [2024-11-05 09:40:24.079399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166df118 00:17:38.361 [2024-11-05 09:40:24.082018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.361 [2024-11-05 09:40:24.082050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:38.361 [2024-11-05 09:40:24.096055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166df988 00:17:38.361 [2024-11-05 09:40:24.098622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.098668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.112767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e01f8 00:17:38.362 [2024-11-05 09:40:24.115320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.115369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.129456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e0a68 00:17:38.362 [2024-11-05 09:40:24.131984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:10076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.132031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.146223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e12d8 00:17:38.362 [2024-11-05 09:40:24.148729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.148776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.162942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e1b48 00:17:38.362 [2024-11-05 09:40:24.165416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.165464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.179726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e23b8 00:17:38.362 [2024-11-05 09:40:24.182194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.182226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.196350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e2c28 00:17:38.362 [2024-11-05 09:40:24.198784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.198814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.213183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e3498 00:17:38.362 [2024-11-05 09:40:24.215584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:7412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.215614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.230050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e3d08 00:17:38.362 [2024-11-05 09:40:24.232436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.232482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.246982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e4578 00:17:38.362 [2024-11-05 09:40:24.249360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:76 nsid:1 lba:7659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.249407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.263898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e4de8 00:17:38.362 [2024-11-05 09:40:24.266282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.266313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.280782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e5658 00:17:38.362 [2024-11-05 09:40:24.283138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.283171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.297570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e5ec8 00:17:38.362 [2024-11-05 09:40:24.299899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14093 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.299945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:38.362 [2024-11-05 09:40:24.314485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e6738 00:17:38.362 [2024-11-05 09:40:24.316801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.362 [2024-11-05 09:40:24.316833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.331369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e6fa8 00:17:38.621 [2024-11-05 09:40:24.333653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:21880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.333700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.348190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e7818 00:17:38.621 [2024-11-05 09:40:24.350480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:9322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.350525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.365134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e8088 00:17:38.621 [2024-11-05 09:40:24.367353] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:7214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.367383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.381954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e88f8 00:17:38.621 [2024-11-05 09:40:24.384154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.384191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.398756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e9168 00:17:38.621 [2024-11-05 09:40:24.400980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.401018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.415692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e99d8 00:17:38.621 [2024-11-05 09:40:24.417970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:7239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.418009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.432551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ea248 00:17:38.621 [2024-11-05 09:40:24.434733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:8082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.434769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.449415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eaab8 00:17:38.621 [2024-11-05 09:40:24.451572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.451607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.466500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eb328 00:17:38.621 [2024-11-05 09:40:24.468616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:8845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.468653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.483364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ebb98 00:17:38.621 [2024-11-05 09:40:24.485607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.621 [2024-11-05 09:40:24.485644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:38.621 [2024-11-05 09:40:24.500354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ec408 00:17:38.621 [2024-11-05 09:40:24.502419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.622 [2024-11-05 09:40:24.502455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:38.622 [2024-11-05 09:40:24.517179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ecc78 00:17:38.622 [2024-11-05 09:40:24.519216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.622 [2024-11-05 09:40:24.519381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:38.622 [2024-11-05 09:40:24.534133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ed4e8 00:17:38.622 [2024-11-05 09:40:24.536285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.622 [2024-11-05 09:40:24.536325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:38.622 [2024-11-05 09:40:24.551113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166edd58 00:17:38.622 [2024-11-05 09:40:24.553130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:22287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.622 [2024-11-05 09:40:24.553167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:38.622 [2024-11-05 09:40:24.567879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ee5c8 00:17:38.622 [2024-11-05 09:40:24.569842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.622 [2024-11-05 09:40:24.569903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.584830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eee38 00:17:38.881 [2024-11-05 09:40:24.586809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.586861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.602572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166ef6a8 00:17:38.881 [2024-11-05 09:40:24.604530] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.604573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.619504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166eff18 00:17:38.881 [2024-11-05 09:40:24.621452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:12097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.621493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.636386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f0788 00:17:38.881 [2024-11-05 09:40:24.638441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.638481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.653359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f0ff8 00:17:38.881 [2024-11-05 09:40:24.655245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.655282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.670164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f1868 00:17:38.881 [2024-11-05 09:40:24.672020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.672190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.687125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f20d8 00:17:38.881 [2024-11-05 09:40:24.689111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.689150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.704069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f2948 00:17:38.881 [2024-11-05 09:40:24.705898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.705935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.720998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f31b8 00:17:38.881 [2024-11-05 
09:40:24.722781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.722820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.737833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f3a28 00:17:38.881 [2024-11-05 09:40:24.739617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.739656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.754641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f4298 00:17:38.881 [2024-11-05 09:40:24.756405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:18651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.756441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.771476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f4b08 00:17:38.881 [2024-11-05 09:40:24.773220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:21188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.773256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.788265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f5378 00:17:38.881 [2024-11-05 09:40:24.789983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.790015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.805164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f5be8 00:17:38.881 [2024-11-05 09:40:24.806835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.806893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.821968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f6458 00:17:38.881 [2024-11-05 09:40:24.823623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:38.881 [2024-11-05 09:40:24.823671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:38.881 [2024-11-05 09:40:24.838885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f6cc8 
00:17:39.141 [2024-11-05 09:40:24.840525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.840559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:39.141 [2024-11-05 09:40:24.856028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f7538 00:17:39.141 [2024-11-05 09:40:24.857673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:24634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.857712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:39.141 [2024-11-05 09:40:24.873076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f7da8 00:17:39.141 [2024-11-05 09:40:24.874675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:23480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.874725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:39.141 [2024-11-05 09:40:24.890066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f8618 00:17:39.141 [2024-11-05 09:40:24.891665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.891714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:39.141 [2024-11-05 09:40:24.906892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f8e88 00:17:39.141 [2024-11-05 09:40:24.908452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:18189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.908498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:39.141 [2024-11-05 09:40:24.923733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166f96f8 00:17:39.141 [2024-11-05 09:40:24.925341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:9530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.925388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:39.141 14991.50 IOPS, 58.56 MiB/s [2024-11-05T09:40:25.099Z] [2024-11-05 09:40:24.942868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93c750) with pdu=0x2000166e9168 00:17:39.141 [2024-11-05 09:40:24.944773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:39.141 [2024-11-05 09:40:24.944806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:39.141 00:17:39.141 Latency(us) 00:17:39.141 [2024-11-05T09:40:25.099Z] 
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.141 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:17:39.141 nvme0n1 : 2.01 15012.56 58.64 0.00 0.00 8516.82 4349.21 32410.53
00:17:39.141 [2024-11-05T09:40:25.099Z] ===================================================================================================================
00:17:39.141 [2024-11-05T09:40:25.099Z] Total : 15012.56 58.64 0.00 0.00 8516.82 4349.21 32410.53
00:17:39.141 {
00:17:39.141   "results": [
00:17:39.141     {
00:17:39.141       "job": "nvme0n1",
00:17:39.141       "core_mask": "0x2",
00:17:39.141       "workload": "randwrite",
00:17:39.141       "status": "finished",
00:17:39.141       "queue_depth": 128,
00:17:39.141       "io_size": 4096,
00:17:39.141       "runtime": 2.009051,
00:17:39.141       "iops": 15012.560656747888,
00:17:39.141       "mibps": 58.64281506542144,
00:17:39.141       "io_failed": 0,
00:17:39.141       "io_timeout": 0,
00:17:39.141       "avg_latency_us": 8516.816746973063,
00:17:39.141       "min_latency_us": 4349.2072727272725,
00:17:39.141       "max_latency_us": 32410.53090909091
00:17:39.141     }
00:17:39.141   ],
00:17:39.141   "core_count": 1
00:17:39.141 }
00:17:39.141 09:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:17:39.141 09:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:17:39.141 09:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
00:17:39.141 09:40:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 118 > 0 ))
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80098
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80098 ']'
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80098
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80098
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:17:39.400 killing process with pid 80098
00:17:39.400 Received shutdown signal, test time was about 2.000000 seconds
00:17:39.400
00:17:39.400 Latency(us)
00:17:39.400 [2024-11-05T09:40:25.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:39.400 [2024-11-05T09:40:25.358Z] ===================================================================================================================
00:17:39.400 [2024-11-05T09:40:25.358Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80098'
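The get_transient_errcount helper traced above reduces to one RPC round-trip: bdev_get_iostat against bdevperf's socket, with jq pulling out the transient-transport-error counter that bdev_nvme_set_options --nvme-error-stat maintains; the test then only asserts the count is positive (118 in this run). A minimal standalone sketch of that check, assuming the same paths as in this log (SPDK checkout at /home/vagrant/spdk_repo/spdk, bdevperf still serving RPC on /var/tmp/bperf.sock):

    SPDK=/home/vagrant/spdk_repo/spdk
    # Query per-bdev I/O statistics from bdevperf; the nvme_error section is
    # only populated when bdev_nvme_set_options was given --nvme-error-stat.
    errcount=$("$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    # Each injected digest failure completes as COMMAND TRANSIENT TRANSPORT
    # ERROR (00/22), so a healthy run of this test yields a positive count.
    (( errcount > 0 ))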
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80098
00:17:39.400 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80098
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=80151
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 80151 /var/tmp/bperf.sock
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 80151 ']'
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:17:39.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:17:39.658 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:39.658 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:39.658 Zero copy mechanism will not be used.
00:17:39.658 [2024-11-05 09:40:25.460685] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:17:39.658 [2024-11-05 09:40:25.460765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80151 ]
00:17:39.658 [2024-11-05 09:40:25.604128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:39.917 [2024-11-05 09:40:25.637265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:17:39.917 [2024-11-05 09:40:25.667571] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:17:39.917 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:17:39.917 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:17:39.917 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:39.917 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:17:40.176 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:17:40.176 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.176 09:40:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:40.176 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.176 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:40.176 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:17:40.435 nvme0n1
00:17:40.435 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:17:40.435 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:40.435 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:17:40.435 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:40.435 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:17:40.435 09:40:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:17:40.695 I/O size of 131072 is greater than zero copy threshold (65536).
00:17:40.695 Zero copy mechanism will not be used.
00:17:40.695 Running I/O for 2 seconds...
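Condensed, the setup just traced is the whole digest-error recipe: start bdevperf idle, enable per-status NVMe error accounting, arm crc32c corruption in the accel layer, attach the controller with data digest on, and release the workload. A sketch of the same sequence under the paths and addresses shown in this log; note that accel_error_inject_error goes through rpc_cmd, i.e. to the nvmf target application rather than to bdevperf, and the target's default RPC socket (/var/tmp/spdk.sock) is an assumption here since the trace hides that invocation:

    SPDK=/home/vagrant/spdk_repo/spdk
    BPERF=/var/tmp/bperf.sock

    # Start bdevperf idle (-z): 128 KiB random writes, queue depth 16,
    # 2-second run, core mask 0x2, private RPC socket, as traced above.
    # (The real test waits for the socket via waitforlisten first.)
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Count NVMe error completions per status code and retry failed I/O at
    # the bdev layer (-1 appears to mean retry indefinitely), which is why
    # this run ends with io_failed=0 yet a nonzero transient-error count.
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Clear any leftover crc32c injection on the target side (assumed to be
    # listening on rpc.py's default socket, /var/tmp/spdk.sock).
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled (--ddgst), then corrupt every 32nd
    # crc32c operation so digest verification starts failing.
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Release the queued workload via bdevperf's RPC helper.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests

Each "Data digest error" entry below is tcp.c catching one of those corrupted CRCs and failing the WRITE with TRANSIENT TRANSPORT ERROR (00/22), which the bdev layer then retries and tallies in the counter checked at the end of the run.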
00:17:40.695 [2024-11-05 09:40:26.449285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.449620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.449653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.454594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.454926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.454952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.459815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.460152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.460184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.465221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.465532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.465563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.470447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.470919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.470954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.475827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.476161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.476192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.481063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.695 [2024-11-05 09:40:26.481372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.695 [2024-11-05 09:40:26.481403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.695 [2024-11-05 09:40:26.486298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.486742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.486766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.491693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.492030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.492066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.496975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.497283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.497313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.502220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.502545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.502575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.507522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.507847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.507888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.512714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.513044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.513074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.517987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.518294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.518324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.523187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.523513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.523543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.528397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.528722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.528753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.533670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.534121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.534146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.539056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.539380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.539410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.544363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.544673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.544703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.549627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.550080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.550105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.555070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.555382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.555411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.560323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.560645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.560676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.565647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.566124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.566149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.571037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.571347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.571376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.576303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.576626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.576657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.581552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.582006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.582031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.586946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.587259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.587288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.592168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.592489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 
[2024-11-05 09:40:26.592519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.597426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.597867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.597908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.602811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.603147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.603177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.608117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.608441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.608471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.613435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.613756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.613786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.618724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.619064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.619093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.623996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.624303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.696 [2024-11-05 09:40:26.624332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.696 [2024-11-05 09:40:26.629265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.696 [2024-11-05 09:40:26.629575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-05 09:40:26.629605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.697 [2024-11-05 09:40:26.634507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.697 [2024-11-05 09:40:26.634814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-05 09:40:26.634856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.697 [2024-11-05 09:40:26.639688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.697 [2024-11-05 09:40:26.640136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-05 09:40:26.640161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.697 [2024-11-05 09:40:26.645094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.697 [2024-11-05 09:40:26.645402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-05 09:40:26.645438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.697 [2024-11-05 09:40:26.650322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.697 [2024-11-05 09:40:26.650635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.697 [2024-11-05 09:40:26.650665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.655594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.656038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.656063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.660997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.661310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.661339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.666222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.666530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.666559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.671452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.671896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.671922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.676852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.677171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.677201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.682082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.682391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.682421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.687357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.687781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.687806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.692744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.693078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.693107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.698079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.698390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.698420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.703368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.703805] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.703830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.708673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.709020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.709050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.714007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.714314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.714343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.719212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.719638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.719662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.724544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.724890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.724920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.729804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.730147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.730177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.735077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.735400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.957 [2024-11-05 09:40:26.735429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.957 [2024-11-05 09:40:26.740294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.957 [2024-11-05 09:40:26.740620] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.740649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.745530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.745849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.745878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.750804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.751280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.751305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.756195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.756519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.756549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.761463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.761785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.761815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.766670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.767129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.767154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.772118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.772426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.772455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.777318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 
00:17:40.958 [2024-11-05 09:40:26.777630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.777661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.782565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.783027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.783052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.787935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.788260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.788290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.793208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.793517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.793546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.798488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.798926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.798959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.803906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.804232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.804262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.809204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.809517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.809547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.814449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.814901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.814925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.819808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.820150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.820180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.825128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.825439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.825468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.830346] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.830786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.830810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.835707] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.836049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.836081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.840960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.841281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.841311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.846177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.846502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.846532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.851431] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.851754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.851784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.856695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.857170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.857195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.862087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.862411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.862441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.867286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.867624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.867655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.872509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.958 [2024-11-05 09:40:26.872977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.958 [2024-11-05 09:40:26.873001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.958 [2024-11-05 09:40:26.877939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.878250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.878280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-05 09:40:26.883176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.883483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.883513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:40.959 [2024-11-05 09:40:26.888444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.888882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.888912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-05 09:40:26.893483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.893569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.893596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:40.959 [2024-11-05 09:40:26.898751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.898837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.898864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:40.959 [2024-11-05 09:40:26.903998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.904087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.904113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:40.959 [2024-11-05 09:40:26.909327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.909414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.909439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:40.959 [2024-11-05 09:40:26.914619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:40.959 [2024-11-05 09:40:26.914705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:40.959 [2024-11-05 09:40:26.914729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.219 [2024-11-05 09:40:26.919902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.219 [2024-11-05 09:40:26.919983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.219 [2024-11-05 09:40:26.920008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0
00:17:41.219 [2024-11-05 09:40:26.925086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90
00:17:41.219 [2024-11-05 09:40:26.925173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:41.219 [2024-11-05 09:40:26.925198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern (data digest error on tqpair=(0x93ca90), the failed WRITE sqid:1 len:32 with varying lba, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061) repeats at a steady ~5 ms cadence from 09:40:26.930 through 09:40:27.440; cid switches from 0 to 15 at 09:40:27.361 ...]
00:17:41.745 5877.00 IOPS, 734.62 MiB/s [2024-11-05T09:40:27.703Z]
[... the pattern resumes at 09:40:27.446 and continues through 09:40:27.468 (cid:15) ...]
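The repeated tcp.c:2233:data_crc32_calc_done errors above are this test exercising the NVMe/TCP data digest (DDGST): the digest carried on each data PDU fails the receiver's recompute, so every WRITE completes with a transient transport error. NVMe/TCP header and data digests are CRC-32C (Castagnoli). Below is a minimal bit-by-bit CRC-32C sketch in C for illustration only; SPDK's actual implementation is table-driven or hardware-accelerated, and the standalone crc32c() here is an assumed name, not SPDK's API.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Reflected CRC-32C (Castagnoli) polynomial. */
    #define CRC32C_POLY_REFLECTED 0x82F63B78u

    /* Bit-by-bit CRC-32C: initial value ~0, final XOR ~0, as used for the
     * NVMe/TCP HDGST and DDGST fields. Illustrative only; production code
     * uses lookup tables or the SSE4.2/ARM CRC instructions. */
    static uint32_t crc32c(const void *buf, size_t len)
    {
        const uint8_t *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len--) {
            crc ^= *p++;
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ ((crc & 1) ? CRC32C_POLY_REFLECTED : 0);
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* "123456789" is the standard CRC check string; CRC-32C gives 0xE3069283. */
        const char msg[] = "123456789";
        printf("crc32c = 0x%08X\n", (unsigned)crc32c(msg, sizeof(msg) - 1));
        return 0;
    }

A single flipped bit anywhere in the PDU payload changes this value, which is exactly the mismatch data_crc32_calc_done keeps reporting above.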
00:17:41.745 [2024-11-05 09:40:27.473321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90
00:17:41.745 [2024-11-05 09:40:27.473412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:41.745 [2024-11-05 09:40:27.473436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same pattern continues every ~5 ms from 09:40:27.478 through 09:40:27.647 (cid:15, sqhd cycling 0001/0021/0041/0061, varying lba) ...]
00:17:41.747 [2024-11-05 09:40:27.652107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90
00:17:41.747 [2024-11-05 09:40:27.652206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT
DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.652230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.657325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.657530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.657554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.662721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.662809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.662833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.667942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.668030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.668054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.673179] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.673264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.673287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.678384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.678475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.678500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.683644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.683872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.683896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.689030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.689102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.689126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.694178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.694253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.694277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:41.747 [2024-11-05 09:40:27.699335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:41.747 [2024-11-05 09:40:27.699537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:41.747 [2024-11-05 09:40:27.699562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.007 [2024-11-05 09:40:27.704704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.007 [2024-11-05 09:40:27.704777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.007 [2024-11-05 09:40:27.704801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.709951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.710035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.710059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.715133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.715214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.715238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.720323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.720393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.720417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.725529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.725718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.725742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.730914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.730987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.731011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.736128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.736206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.736230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.741370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.741560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.741584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.746703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.746777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.746800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.751920] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.752000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.752023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.757136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.757215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.757240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.762425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 
09:40:27.762527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.762551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.767699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.767787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.767810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.772957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.773038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.773062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.778132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.778202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.778226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.783341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.783432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.783455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.788555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.788760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.788784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.794019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.794093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.794118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.799260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with 
pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.799362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.799385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.804548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.008 [2024-11-05 09:40:27.804763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.008 [2024-11-05 09:40:27.804787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.008 [2024-11-05 09:40:27.809977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.810061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.810085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.815227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.815315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.815339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.820428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.820639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.820663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.825763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.825851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.825889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.830897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.830990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.831014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.836194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.836277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.836301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.841431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.841515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.841539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.846726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.846814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.846837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.852011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.852092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.852116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.857259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.857365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.857388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.862459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.862547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.862570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.867661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.867894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.867919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.873096] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.873171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.873195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.878287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.878376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.878400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.883527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.883731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.883755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.888859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.888952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.888985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.894123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.894211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.894234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.899375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.899578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.899602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.904702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.904776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.904800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:17:42.009 [2024-11-05 09:40:27.910002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.910076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.910103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.915218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.009 [2024-11-05 09:40:27.915306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.009 [2024-11-05 09:40:27.915332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.009 [2024-11-05 09:40:27.920469] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.920571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.920597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.925820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.925928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.925954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.931127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.931206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.931231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.936436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.936551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.936576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.941793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.941892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.941917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.947068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.947165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.947190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.952316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.952394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.952418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.957546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.957619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.957643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.010 [2024-11-05 09:40:27.962787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.010 [2024-11-05 09:40:27.963002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.010 [2024-11-05 09:40:27.963025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.968237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.968342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:27.968366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.973532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.973621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:27.973645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.978742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.978962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:27.978987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.984141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.984215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:27.984239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.989377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.989464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:27.989488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.994619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.994823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:27.994847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:27.999925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:27.999995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:28.000020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:28.005159] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:28.005232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:28.005262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:28.010412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.270 [2024-11-05 09:40:28.010616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.270 [2024-11-05 09:40:28.010639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.270 [2024-11-05 09:40:28.015759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.015871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.015895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.020915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.021004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.021028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.026085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.026173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.026197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.031245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.031336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.031360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.036490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.036578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.036602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.041694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.041908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.041932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.047030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.047126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.047150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.052220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.052308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 
[2024-11-05 09:40:28.052333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.057445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.057657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.057681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.062760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.062848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.062888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.068016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.068099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.068123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.073196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.073271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.073295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.078380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.078482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.078506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.083613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.083702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.083726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.088806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.088914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.088938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.094029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.094120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.094144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.099278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.099373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.099397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.104523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.104611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.104635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.109781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.110000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.110024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.115121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.115195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.115218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.120359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.120433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:42.271 [2024-11-05 09:40:28.120457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:42.271 [2024-11-05 09:40:28.125593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90 00:17:42.271 [2024-11-05 09:40:28.125783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:42.271 [2024-11-05 09:40:28.125807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:17:42.271 [2024-11-05 09:40:28.130960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90
00:17:42.271 [2024-11-05 09:40:28.131054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:42.271 [2024-11-05 09:40:28.131081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-message sequence (tcp.c:2233:data_crc32_calc_done data digest error, nvme_io_qpair_print_command WRITE, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats for roughly sixty further WRITEs between 00:17:42.271 and 00:17:42.534, identical apart from timestamp, lba, and sqhd ...]
00:17:42.534 5881.50 IOPS, 735.19 MiB/s
[2024-11-05T09:40:28.492Z] [2024-11-05 09:40:28.447006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x93ca90) with pdu=0x2000166fef90
00:17:42.534 [2024-11-05 09:40:28.447209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:17:42.534 [2024-11-05 09:40:28.447485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:17:42.534
00:17:42.534 Latency(us)
00:17:42.534 [2024-11-05T09:40:28.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:42.534 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:17:42.534 nvme0n1 : 2.00 5877.90 734.74 0.00 0.00 2715.17 1377.75 6911.07
00:17:42.534
[2024-11-05T09:40:28.492Z] =================================================================================================================== 00:17:42.534 [2024-11-05T09:40:28.492Z] Total : 5877.90 734.74 0.00 0.00 2715.17 1377.75 6911.07 00:17:42.534 { 00:17:42.534 "results": [ 00:17:42.534 { 00:17:42.534 "job": "nvme0n1", 00:17:42.534 "core_mask": "0x2", 00:17:42.534 "workload": "randwrite", 00:17:42.534 "status": "finished", 00:17:42.534 "queue_depth": 16, 00:17:42.534 "io_size": 131072, 00:17:42.534 "runtime": 2.004799, 00:17:42.534 "iops": 5877.895988575413, 00:17:42.534 "mibps": 734.7369985719266, 00:17:42.534 "io_failed": 0, 00:17:42.534 "io_timeout": 0, 00:17:42.534 "avg_latency_us": 2715.1704746034684, 00:17:42.534 "min_latency_us": 1377.7454545454545, 00:17:42.534 "max_latency_us": 6911.069090909091 00:17:42.534 } 00:17:42.534 ], 00:17:42.534 "core_count": 1 00:17:42.534 } 00:17:42.534 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:42.534 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:42.534 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:42.534 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:42.534 | .driver_specific 00:17:42.534 | .nvme_error 00:17:42.534 | .status_code 00:17:42.534 | .command_transient_transport_error' 00:17:43.102 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 380 > 0 )) 00:17:43.102 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 80151 00:17:43.102 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 80151 ']' 00:17:43.102 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 80151 00:17:43.102 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:17:43.102 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80151 00:17:43.103 killing process with pid 80151 00:17:43.103 Received shutdown signal, test time was about 2.000000 seconds 00:17:43.103 00:17:43.103 Latency(us) 00:17:43.103 [2024-11-05T09:40:29.061Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.103 [2024-11-05T09:40:29.061Z] =================================================================================================================== 00:17:43.103 [2024-11-05T09:40:29.061Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80151' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 80151 00:17:43.103 09:40:28 
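The error count checked next comes straight from bdevperf's RPC server: host/digest.sh calls bdev_get_iostat over /var/tmp/bperf.sock and filters the nvme_error status-code counters with jq, exactly as traced above. A standalone sketch of the same probe (socket path, bdev name, and jq filter from the trace; errs is an illustrative variable):

    # hedged sketch: read back the transient transport error count the way host/digest.sh does
    errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))    # the test passes only if at least one injected digest failure was observed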
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 80151 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 79978 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 79978 ']' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 79978 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79978 00:17:43.103 killing process with pid 79978 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79978' 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 79978 00:17:43.103 09:40:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 79978 00:17:43.362 ************************************ 00:17:43.362 END TEST nvmf_digest_error 00:17:43.362 00:17:43.362 real 0m14.943s 00:17:43.362 user 0m29.507s 00:17:43.362 sys 0m4.210s 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:17:43.362 ************************************ 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:43.362 rmmod nvme_tcp 00:17:43.362 rmmod nvme_fabrics 00:17:43.362 rmmod nvme_keyring 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 79978 ']' 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 79978 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 79978 ']' 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # 
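The process shutdowns above and below all run through the killprocess helper in autotest_common.sh. Its traced steps (empty-pid guard, kill -0 liveness probe, uname and ps comm lookup, kill, wait) suggest roughly the following shape; this is an approximate reconstruction from the trace, not the verbatim helper:

    # hedged reconstruction of killprocess as walked through in the trace
    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1                       # the '[' -z ... ']' guard
        kill -0 "$pid"                                  # fails if the pid is already gone
        local process_name
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"                                     # sudo-owned processes take a different path in the real helper
        wait "$pid"
    }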
kill -0 79978 00:17:43.362 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (79978) - No such process 00:17:43.362 Process with pid 79978 is not found 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 79978 is not found' 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:43.362 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.620 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:43.620 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:43.620 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:43.620 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:43.620 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:17:43.621 ************************************ 00:17:43.621 END TEST nvmf_digest 00:17:43.621 ************************************ 00:17:43.621 00:17:43.621 real 0m31.383s 00:17:43.621 user 1m0.377s 00:17:43.621 sys 0m8.924s 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
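nvmftestfini, traced above, unwinds everything the setup built: kernel initiator modules are unloaded, only the iptables rules tagged SPDK_NVMF are dropped, the veth and bridge topology is dismantled, and the target network namespace is removed. Condensed into a sketch (interface and namespace names from the trace):

    # hedged sketch of the teardown sequence traced above
    modprobe -r nvme-tcp nvme-fabrics                         # rmmod nvme_tcp/nvme_fabrics/nvme_keyring in the log
    iptables-save | grep -v SPDK_NVMF | iptables-restore      # every non-SPDK rule survives intact
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                          # what _remove_spdk_ns boils down to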
common/autotest_common.sh@1128 -- # xtrace_disable 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:43.621 ************************************ 00:17:43.621 START TEST nvmf_host_multipath 00:17:43.621 ************************************ 00:17:43.621 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:43.878 * Looking for test storage... 00:17:43.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:43.878 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:43.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.879 --rc genhtml_branch_coverage=1 00:17:43.879 --rc genhtml_function_coverage=1 00:17:43.879 --rc genhtml_legend=1 00:17:43.879 --rc geninfo_all_blocks=1 00:17:43.879 --rc geninfo_unexecuted_blocks=1 00:17:43.879 00:17:43.879 ' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:43.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.879 --rc genhtml_branch_coverage=1 00:17:43.879 --rc genhtml_function_coverage=1 00:17:43.879 --rc genhtml_legend=1 00:17:43.879 --rc geninfo_all_blocks=1 00:17:43.879 --rc geninfo_unexecuted_blocks=1 00:17:43.879 00:17:43.879 ' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:43.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.879 --rc genhtml_branch_coverage=1 00:17:43.879 --rc genhtml_function_coverage=1 00:17:43.879 --rc genhtml_legend=1 00:17:43.879 --rc geninfo_all_blocks=1 00:17:43.879 --rc geninfo_unexecuted_blocks=1 00:17:43.879 00:17:43.879 ' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:43.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:43.879 --rc genhtml_branch_coverage=1 00:17:43.879 --rc genhtml_function_coverage=1 00:17:43.879 --rc genhtml_legend=1 00:17:43.879 --rc geninfo_all_blocks=1 00:17:43.879 --rc geninfo_unexecuted_blocks=1 00:17:43.879 00:17:43.879 ' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
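The lcov probe above feeds two dotted version strings into cmp_versions, which splits them on dots and compares component by component; lt 1.15 2 succeeding is how the harness concludes this lcov predates 2.0. The core of that comparison as a sketch (function name from the trace, body approximate):

    # hedged sketch of the dotted-version test driving the branch taken above
    lt() {
        [[ $1 == "$2" ]] && return 1
        local IFS=. i
        local -a a=($1) b=($2)
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            ((${a[i]:-0} < ${b[i]:-0})) && return 0    # first lower component decides
            ((${a[i]:-0} > ${b[i]:-0})) && return 1
        done
        return 1
    }
    lt 1.15 2 && echo 'lcov 1.15 predates 2'           # matches the path taken in the log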
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:43.879 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
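Earlier in this prologue nvmf/common.sh minted a host identity: nvme gen-hostnqn produced the uuid-flavored NQN shown above, the uuid doubles as the host ID, and both land in the NVME_HOST argument array that later nvme connect calls reuse. As a sketch (variable names and values from the trace; deriving the ID by stripping the NQN suffix is an assumption):

    # hedged sketch of the host-identity setup traced above
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # assumed derivation: the bare uuid after the last colon
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")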
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
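nvmf_veth_init works from a fixed address plan: two initiator-side interfaces on the host, two target-side interfaces inside the nvmf_tgt_ns_spdk namespace, everything in 10.0.0.0/24 and ultimately joined by one bridge. Restated from the variables assigned above (interface pairings taken from the addr-add commands that follow):

    # the address plan behind the setup that follows (values from the trace)
    NVMF_FIRST_INITIATOR_IP=10.0.0.1    # host side, nvmf_init_if
    NVMF_SECOND_INITIATOR_IP=10.0.0.2   # host side, nvmf_init_if2
    NVMF_FIRST_TARGET_IP=10.0.0.3       # inside nvmf_tgt_ns_spdk, nvmf_tgt_if
    NVMF_SECOND_TARGET_IP=10.0.0.4      # inside nvmf_tgt_ns_spdk, nvmf_tgt_if2
    NVMF_BRIDGE=nvmf_br                 # bridge that joins the four veth peers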
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:43.879 Cannot find device "nvmf_init_br" 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:43.879 Cannot find device "nvmf_init_br2" 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:43.879 Cannot find device "nvmf_tgt_br" 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:43.879 Cannot find device "nvmf_tgt_br2" 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:17:43.879 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:43.879 Cannot find device "nvmf_init_br" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:44.137 Cannot find device "nvmf_init_br2" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:44.137 Cannot find device "nvmf_tgt_br" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:44.137 Cannot find device "nvmf_tgt_br2" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:44.137 Cannot find device "nvmf_br" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:44.137 Cannot find device "nvmf_init_if" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:44.137 Cannot find device "nvmf_init_if2" 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:17:44.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:44.137 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:44.137 09:40:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
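Stripped of the xtrace noise, the build-up above is: create the namespace, create four veth pairs, move the target ends into the namespace, address and raise every link, then enslave the four host-side peers to a single bridge. The same steps as one runnable sketch (identical names and addresses):

    # hedged condensation of the nvmf_veth_init commands traced above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip link set nvmf_init_br up && ip link set nvmf_init_br2 up
    ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" master nvmf_br
    done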
00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:44.137 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:44.396 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:44.397 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:44.397 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:17:44.397 00:17:44.397 --- 10.0.0.3 ping statistics --- 00:17:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.397 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:44.397 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:44.397 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.065 ms 00:17:44.397 00:17:44.397 --- 10.0.0.4 ping statistics --- 00:17:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.397 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:44.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:17:44.397 00:17:44.397 --- 10.0.0.1 ping statistics --- 00:17:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.397 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:44.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:17:44.397 00:17:44.397 --- 10.0.0.2 ping statistics --- 00:17:44.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.397 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:44.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=80461 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 80461 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80461 ']' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:44.397 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:44.397 [2024-11-05 09:40:30.216676] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
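With connectivity verified in all four directions by the pings above, nvmfappstart launches the target. NVMF_APP was prefixed with the namespace wrapper earlier, so every socket the target opens lives on the 10.0.0.3/10.0.0.4 side of the bridge, while the iptables rules added before the pings (tagged SPDK_NVMF by the ipts wrapper at common.sh@790, so teardown can find and purge them) admit port-4420 traffic at the initiator interfaces plus bridged forwarding on nvmf_br. A minimal sketch of the launch-and-wait sequence, assuming waitforlisten simply polls the RPC socket:

    NVMF_TARGET_NS_CMD=(ip netns exec nvmf_tgt_ns_spdk)
    "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m 0x3 &    # shm id 0, all tracepoint groups, cores 0-1
    nvmfpid=$!
    # Do not issue RPCs until the app accepts them on /var/tmp/spdk.sock.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done

The `-m 0x3` mask matches the "Total cores available: 2" and the two reactor-start notices in the EAL output that follows.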
00:17:44.397 [2024-11-05 09:40:30.216779] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.656 [2024-11-05 09:40:30.369996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:44.656 [2024-11-05 09:40:30.409202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.656 [2024-11-05 09:40:30.409438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.656 [2024-11-05 09:40:30.409595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.656 [2024-11-05 09:40:30.409664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.656 [2024-11-05 09:40:30.409802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.656 [2024-11-05 09:40:30.413879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.656 [2024-11-05 09:40:30.413921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.656 [2024-11-05 09:40:30.448152] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=80461 00:17:44.656 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.915 [2024-11-05 09:40:30.830618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.915 09:40:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:45.481 Malloc0 00:17:45.481 09:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:45.739 09:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:45.998 09:40:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:46.256 [2024-11-05 09:40:32.078019] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:46.256 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:17:46.515 [2024-11-05 09:40:32.350148] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:17:46.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=80509 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 80509 /var/tmp/bdevperf.sock 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@833 -- # '[' -z 80509 ']' 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:46.515 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:46.773 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:46.773 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@866 -- # return 0 00:17:46.773 09:40:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:47.339 09:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:47.597 Nvme0n1 00:17:47.597 09:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:47.855 Nvme0n1 00:17:48.113 09:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:17:48.113 09:40:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:49.049 09:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:49.049 09:40:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:49.307 09:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:17:49.566 09:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:49.566 09:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80557 00:17:49.566 09:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:49.566 09:40:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:56.128 Attaching 4 probes... 00:17:56.128 @path[10.0.0.3, 4421]: 16849 00:17:56.128 @path[10.0.0.3, 4421]: 17414 00:17:56.128 @path[10.0.0.3, 4421]: 17478 00:17:56.128 @path[10.0.0.3, 4421]: 17510 00:17:56.128 @path[10.0.0.3, 4421]: 17230 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80557 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:56.128 09:40:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:17:56.387 09:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:17:56.645 09:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:56.645 09:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80666 00:17:56.645 09:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:17:56.645 09:40:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.205 Attaching 4 probes... 00:18:03.205 @path[10.0.0.3, 4420]: 17047 00:18:03.205 @path[10.0.0.3, 4420]: 17352 00:18:03.205 @path[10.0.0.3, 4420]: 17387 00:18:03.205 @path[10.0.0.3, 4420]: 17275 00:18:03.205 @path[10.0.0.3, 4420]: 17321 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80666 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:18:03.205 09:40:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:03.205 09:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:03.463 09:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:18:03.463 09:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80784 00:18:03.463 09:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:03.463 09:40:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:10.064 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:10.064 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.065 Attaching 4 probes... 00:18:10.065 @path[10.0.0.3, 4421]: 13653 00:18:10.065 @path[10.0.0.3, 4421]: 16929 00:18:10.065 @path[10.0.0.3, 4421]: 17054 00:18:10.065 @path[10.0.0.3, 4421]: 16883 00:18:10.065 @path[10.0.0.3, 4421]: 17034 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80784 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:18:10.065 09:40:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:18:10.323 09:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:18:10.323 09:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=80902 00:18:10.323 09:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:10.323 09:40:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.916 Attaching 4 probes... 
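This third cycle is the degenerate case: both listeners were just set inaccessible, so the probe window that follows records no @path samples at all (only bare timestamps), and the verification passes vacuously with both sides of the comparison empty. A sketch of why, under the same jq selector seen in the trace:

    # No listener reports ana_state "" and no I/O was observed, so both
    # the RPC answer and the parsed port come back empty.
    active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r '.[] | select(.ana_states[0].ana_state=="") | .address.trsvcid')
    [[ '' == '' ]]   # empty vs. empty: no path may carry I/O right now

A host-side view of what an inaccessible window looks like appears in the bdevperf log reproduced further down: streams of ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions, timestamped 09:40:42, from the moment the 4421 path was first made inaccessible with I/O in flight, which the multipath policy then re-routes so bdevperf keeps running through every state change.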
00:18:16.916 00:18:16.916 00:18:16.916 00:18:16.916 00:18:16.916 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 80902 00:18:16.916 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:16.917 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:18:16.917 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:18:16.917 09:41:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:17.174 09:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:18:17.174 09:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81019 00:18:17.174 09:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:17.174 09:41:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.736 Attaching 4 probes... 
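Each confirm_io_on_port cycle cross-checks two independent observations: what the target claims over RPC, and where I/O actually flowed according to the bpftrace probe attached to the target PID (80461). The @path lines that follow are the probe's per-interval I/O counts for the port carrying traffic; the comparison logic, reconstructed from the multipath.sh trace (paths shortened):

    # 1) Which listener does the target report in the expected ANA state?
    active_port=$(rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
        jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')
    # 2) Which port did the host actually drive I/O to? (first sample wins)
    port=$(awk '$1=="@path[10.0.0.3," {print $2}' trace.txt | cut -d ']' -f1 | sed -n 1p)
    # 3) Both observations must name the port the test expects (4421 here).
    [[ $active_port == 4421 ]]   # the target agrees
    [[ $port == 4421 ]]          # the observed I/O agrees

The ~16.6-17.0k samples below confirm that I/O swung back to 4421 as soon as its ANA state returned to optimized.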
00:18:23.736 @path[10.0.0.3, 4421]: 16729 00:18:23.736 @path[10.0.0.3, 4421]: 16564 00:18:23.736 @path[10.0.0.3, 4421]: 16878 00:18:23.736 @path[10.0.0.3, 4421]: 16976 00:18:23.736 @path[10.0.0.3, 4421]: 16897 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81019 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:23.736 09:41:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:18:25.112 09:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:18:25.112 09:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81138 00:18:25.112 09:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:25.112 09:41:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:31.705 09:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:31.705 09:41:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:31.705 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:18:31.705 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:31.705 Attaching 4 probes... 
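The cycle that follows is the only one driven by a topology change rather than an ANA transition: the 4421 listener is removed outright, and after a one-second settle the host is expected to carry all I/O on 4420 even though that path is still marked non_optimized, since a non-optimized path beats no path at all. The two commands, as in the trace (rpc.py path shortened):

    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
    sleep 1   # give the host's multipath policy time to fail over to the remaining path

The ~16.5-16.9k-I/O samples below, against ~17k on 4421 beforehand, show the failover cost essentially nothing in throughput.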
00:18:31.705 @path[10.0.0.3, 4420]: 16502 00:18:31.705 @path[10.0.0.3, 4420]: 16868 00:18:31.705 @path[10.0.0.3, 4420]: 16801 00:18:31.705 @path[10.0.0.3, 4420]: 16832 00:18:31.705 @path[10.0.0.3, 4420]: 16871 00:18:31.705 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:31.705 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:31.705 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81138 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:18:31.706 [2024-11-05 09:41:17.312833] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:18:31.706 09:41:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:18:38.273 09:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:38.273 09:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=81317 00:18:38.273 09:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80461 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:38.273 09:41:23 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:44.857 Attaching 4 probes... 
00:18:44.857 @path[10.0.0.3, 4421]: 16447 00:18:44.857 @path[10.0.0.3, 4421]: 17248 00:18:44.857 @path[10.0.0.3, 4421]: 17234 00:18:44.857 @path[10.0.0.3, 4421]: 17432 00:18:44.857 @path[10.0.0.3, 4421]: 17152 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 81317 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 80509 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80509 ']' 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80509 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80509 00:18:44.857 killing process with pid 80509 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80509' 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80509 00:18:44.857 09:41:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80509 00:18:44.857 { 00:18:44.857 "results": [ 00:18:44.857 { 00:18:44.857 "job": "Nvme0n1", 00:18:44.857 "core_mask": "0x4", 00:18:44.857 "workload": "verify", 00:18:44.857 "status": "terminated", 00:18:44.857 "verify_range": { 00:18:44.857 "start": 0, 00:18:44.857 "length": 16384 00:18:44.857 }, 00:18:44.857 "queue_depth": 128, 00:18:44.857 "io_size": 4096, 00:18:44.857 "runtime": 56.018327, 00:18:44.857 "iops": 7257.410597071205, 00:18:44.857 "mibps": 28.349260144809396, 00:18:44.857 "io_failed": 0, 00:18:44.857 "io_timeout": 0, 00:18:44.857 "avg_latency_us": 17603.49282793399, 00:18:44.857 "min_latency_us": 867.6072727272727, 00:18:44.857 "max_latency_us": 7046430.72 00:18:44.857 } 00:18:44.857 ], 00:18:44.857 "core_count": 1 00:18:44.857 } 00:18:44.857 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 80509 00:18:44.857 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:44.857 [2024-11-05 09:40:32.420285] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 
24.03.0 initialization... 00:18:44.857 [2024-11-05 09:40:32.420387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80509 ] 00:18:44.857 [2024-11-05 09:40:32.569007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.857 [2024-11-05 09:40:32.602540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:44.857 [2024-11-05 09:40:32.632950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:44.857 Running I/O for 90 seconds... 00:18:44.857 6676.00 IOPS, 26.08 MiB/s [2024-11-05T09:41:30.815Z] 7577.00 IOPS, 29.60 MiB/s [2024-11-05T09:41:30.815Z] 7930.00 IOPS, 30.98 MiB/s [2024-11-05T09:41:30.815Z] 8121.25 IOPS, 31.72 MiB/s [2024-11-05T09:41:30.815Z] 8244.20 IOPS, 32.20 MiB/s [2024-11-05T09:41:30.815Z] 8331.50 IOPS, 32.54 MiB/s [2024-11-05T09:41:30.815Z] 8372.00 IOPS, 32.70 MiB/s [2024-11-05T09:41:30.815Z] 8412.50 IOPS, 32.86 MiB/s [2024-11-05T09:41:30.815Z] [2024-11-05 09:40:42.432423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.432825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.432881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.432950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.432987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.433008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.433047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.433086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.433125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.433164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.857 [2024-11-05 09:40:42.433206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:44.857 [2024-11-05 09:40:42.433576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:44.857 [2024-11-05 09:40:42.433606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.857 [2024-11-05 09:40:42.433625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.433975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.433998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.434014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.434052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.434091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.434130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.434170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.858 [2024-11-05 09:40:42.434210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:44.858 [2024-11-05 09:40:42.434751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.858 [2024-11-05 09:40:42.434767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 
dnr:0
00:18:44.858 [2024-11-05 09:40:42.434797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.858 [2024-11-05 09:40:42.434815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
[... further command/completion pairs of the same form elided: READ lba 43992-44184 and WRITE lba 44384-44824 on qid:1, every completion failing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:18:44.860 8396.78 IOPS, 32.80 MiB/s [2024-11-05T09:41:30.818Z] 8420.30 IOPS, 32.89 MiB/s [2024-11-05T09:41:30.818Z] 8444.64 IOPS, 32.99 MiB/s [2024-11-05T09:41:30.818Z] 8466.25 IOPS, 33.07 MiB/s [2024-11-05T09:41:30.818Z] 8479.62 IOPS, 33.12 MiB/s [2024-11-05T09:41:30.818Z] 8492.21 IOPS, 33.17 MiB/s [2024-11-05T09:41:30.818Z] 8505.27 IOPS, 33.22 MiB/s [2024-11-05T09:41:30.818Z]
[2024-11-05 09:40:49.022132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:107096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.861 [2024-11-05 09:40:49.022186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
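An aside on reading these traces: the "(03/02)" that every completion above reports is SPDK's (SCT/SC) rendering of the NVMe status field, namely Status Code Type 0x3 (Path Related Status) with Status Code 0x02 (Asymmetric Access Inaccessible), i.e. the ANA state of this path to the namespace makes it unusable, which is apparently the condition the test is driving. A minimal sketch of that decoding follows; the lookup tables cover only the codes relevant here and are illustrative, not exhaustive.

# Annotation, not part of the build log: decode an (SCT, SC) pair as
# printed by spdk_nvme_print_completion, e.g. "(03/02)".
SCT_NAMES = {
    0x0: "Generic Command Status",
    0x1: "Command Specific Status",
    0x2: "Media and Data Integrity Errors",
    0x3: "Path Related Status",
}
PATH_SC_NAMES = {
    0x00: "Internal Path Error",
    0x01: "Asymmetric Access Persistent Loss",
    0x02: "Asymmetric Access Inaccessible",
    0x03: "Asymmetric Access Transition",
}

def decode_status(sct: int, sc: int) -> str:
    """Map an (SCT, SC) pair from a completion entry to readable names."""
    sct_name = SCT_NAMES.get(sct, f"SCT {sct:#x}")
    # Only the path-related SC values are tabulated here.
    sc_name = PATH_SC_NAMES.get(sc, f"SC {sc:#x}") if sct == 0x3 else f"SC {sc:#x}"
    return f"{sct_name} / {sc_name}"

print(decode_status(0x03, 0x02))
# -> Path Related Status / Asymmetric Access Inaccessible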
len:0x1000 00:18:44.861 [2024-11-05 09:40:49.022474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.022509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:107152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.022544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:106648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:106656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:106672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:106688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:106696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:106704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.022882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.022948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.022972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:107168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.022989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:107176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.023037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:107184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.023090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:107192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.023144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.023182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.023221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:107216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.861 [2024-11-05 09:40:49.023259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.023297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023319] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.023335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:106728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.023373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:106736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.023420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:106744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.023460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:106752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.861 [2024-11-05 09:40:49.023499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:44.861 [2024-11-05 09:40:49.023538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:106760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:106768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:106776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:106792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 
cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:106800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:106816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:106824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:106832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.023904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:107224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.023956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.023978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:107232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.023994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:107240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:107248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:107256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:107264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:107272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:107280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:107288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:107296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:107304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:107312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:107320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:107328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:107336 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:107344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:106840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:106856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:106864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:106872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:106880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:106888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:106896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.862 [2024-11-05 09:40:49.024815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:107352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:107360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.024961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:107368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.862 [2024-11-05 09:40:49.024987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:44.862 [2024-11-05 09:40:49.025011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:107376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:107384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:107392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:107408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:107416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:107424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025288] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:107432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:107440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:107456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:107464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:107480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:107488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:107496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:107504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.025674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 
sqhd:002b p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:106912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:106920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:106928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:106944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.025975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.025998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:106960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:106976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026089] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:106984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:106992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:107000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:107016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:107024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.863 [2024-11-05 09:40:49.026322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:107512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.026359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:107520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.026406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:107528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.026445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:107536 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.026483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:44.863 [2024-11-05 09:40:49.026509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:107544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.863 [2024-11-05 09:40:49.026526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:107552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:107560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:107568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:107576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:107584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:49.026797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:107032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.864 [2024-11-05 09:40:49.026848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:49.026883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:93 nsid:1 lba:107040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.026900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.026923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:107048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.026939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.026961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:107056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.026977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.026999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.027015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.027037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:107072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.027053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.027076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.027093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.027833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:107088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.864 [2024-11-05 09:40:49.027877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.027913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:107608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.027933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.027964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:107616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.027980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.028011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:107624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.028027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.028058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:107632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.028074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.028105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:107640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.028121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.028167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:107648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.028185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.028216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:107656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.028237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:49.028285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:107664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:49.028306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:18:44.864 8001.19 IOPS, 31.25 MiB/s [2024-11-05T09:41:30.822Z]
7996.94 IOPS, 31.24 MiB/s [2024-11-05T09:41:30.822Z]
8024.22 IOPS, 31.34 MiB/s [2024-11-05T09:41:30.822Z]
8049.47 IOPS, 31.44 MiB/s [2024-11-05T09:41:30.822Z]
8068.60 IOPS, 31.52 MiB/s [2024-11-05T09:41:30.822Z]
8090.86 IOPS, 31.60 MiB/s [2024-11-05T09:41:30.822Z]
8110.55 IOPS, 31.68 MiB/s [2024-11-05T09:41:30.822Z]
[2024-11-05 09:40:56.200217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:56.200297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:56.200359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:56.200382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:56.200407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:1712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:56.200424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:18:44.864 [2024-11-05 09:40:56.200447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:1720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.864 [2024-11-05 09:40:56.200463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1
cid:53 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:1728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:56.200501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:56.200541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:56.200579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.864 [2024-11-05 09:40:56.200618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.864 [2024-11-05 09:40:56.200684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.864 [2024-11-05 09:40:56.200727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:44.864 [2024-11-05 09:40:56.200749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:1264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.864 [2024-11-05 09:40:56.200765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.200787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.200803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.200825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.200857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.200882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.200898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.200920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.200936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.200958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:1304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.200973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:1312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:1320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:1336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:1368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.201326] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:1760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:1768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:1800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:1808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:1832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:44.865 [2024-11-05 09:40:56.201722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:1856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:1864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.201971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.201994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:1376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:1384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:1392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:1400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:1416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.865 [2024-11-05 09:40:56.202290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.865 [2024-11-05 09:40:56.202335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:44.865 [2024-11-05 09:40:56.202357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:1920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:1936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:1952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:1984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.202939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:44.866 
[2024-11-05 09:40:56.202977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.202993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:2016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.203033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.203072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.203110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.866 [2024-11-05 09:40:56.203149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:1472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 
cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:1504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.866 [2024-11-05 09:40:56.203592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:44.866 [2024-11-05 09:40:56.203614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:1528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.203630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.203668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:1544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.203706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:1552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.203752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.203792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.203831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.203883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.203922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.203963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.203989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204164] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.204288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:1584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:1592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:1608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:44.867 [2024-11-05 09:40:56.204556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:1624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:1632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:1664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.204872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.204889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:44.867 [2024-11-05 09:40:56.205653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:2144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.205707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.205775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.205828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.205897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.205945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.205976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:2184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.867 [2024-11-05 09:40:56.205992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:44.867 [2024-11-05 09:40:56.206023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.868 [2024-11-05 09:40:56.206039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:44.868 [2024-11-05 09:40:56.206107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:2200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.868 [2024-11-05 09:40:56.206132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:44.868 [2024-11-05 09:40:56.206164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.868 [2024-11-05 09:40:56.206181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:44.868 [2024-11-05 09:40:56.206213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:44.868 [2024-11-05 09:40:56.206229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:44.868 [2024-11-05 09:40:56.206260] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:2224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.868 [2024-11-05 09:40:56.206276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:40:56.206307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.868 [2024-11-05 09:40:56.206323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:40:56.206353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.868 [2024-11-05 09:40:56.206369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:40:56.206400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.868 [2024-11-05 09:40:56.206417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:40:56.206447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.868 [2024-11-05 09:40:56.206464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:40:56.206495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:18:44.868 [2024-11-05 09:40:56.206512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:18:44.868 7842.61 IOPS, 30.64 MiB/s [2024-11-05T09:41:30.826Z]
7515.83 IOPS, 29.36 MiB/s [2024-11-05T09:41:30.826Z]
7215.20 IOPS, 28.18 MiB/s [2024-11-05T09:41:30.826Z]
6937.69 IOPS, 27.10 MiB/s [2024-11-05T09:41:30.826Z]
6680.74 IOPS, 26.10 MiB/s [2024-11-05T09:41:30.826Z]
6442.14 IOPS, 25.16 MiB/s [2024-11-05T09:41:30.826Z]
6220.00 IOPS, 24.30 MiB/s [2024-11-05T09:41:30.826Z]
6227.13 IOPS, 24.32 MiB/s [2024-11-05T09:41:30.826Z]
6298.42 IOPS, 24.60 MiB/s [2024-11-05T09:41:30.826Z]
6362.91 IOPS, 24.86 MiB/s [2024-11-05T09:41:30.826Z]
6426.09 IOPS, 25.10 MiB/s [2024-11-05T09:41:30.826Z]
6485.79 IOPS, 25.34 MiB/s [2024-11-05T09:41:30.826Z]
6541.63 IOPS, 25.55 MiB/s [2024-11-05T09:41:30.826Z]
[2024-11-05 09:41:09.675920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.868 [2024-11-05 09:41:09.675992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:41:09.676030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.868 [2024-11-05 09:41:09.676054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.868 [2024-11-05 09:41:09.676096] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:44.868 [2024-11-05 09:41:09.676113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining queued READ (lba 44032-44512) and WRITE (lba 44528-45024) commands on qid:1 are printed and aborted with the same SQ DELETION (00/08) status while the submission queue is deleted for the controller reset ...]
00:18:44.871 [2024-11-05 09:41:09.680108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:18:44.871 [2024-11-05 09:41:09.680127] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:18:44.871 [2024-11-05 09:41:09.680139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44520 len:8 PRP1 0x0 PRP2 0x0
00:18:44.871 [2024-11-05 09:41:09.680162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:18:44.871 [2024-11-05 09:41:09.680287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:44.871 [2024-11-05 09:41:09.680314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST abort is repeated for admin qpair cid:1, cid:2 and cid:3 ...]
00:18:44.871 [2024-11-05 09:41:09.680422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ee50 is same with the state(6) to be set
00:18:44.871 [2024-11-05 09:41:09.681569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:44.871 [2024-11-05 09:41:09.681610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ee50 (9): Bad file descriptor
00:18:44.871 [2024-11-05 09:41:09.682002] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:18:44.871 [2024-11-05 09:41:09.682038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd4ee50 with addr=10.0.0.3, port=4421
00:18:44.871 [2024-11-05 09:41:09.682056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd4ee50 is same with the state(6) to be set
00:18:44.871 [2024-11-05 09:41:09.682244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ee50 (9): Bad file descriptor
00:18:44.871 [2024-11-05 09:41:09.682316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:18:44.871 [2024-11-05 09:41:09.682339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:18:44.871 [2024-11-05 09:41:09.682355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:18:44.871 [2024-11-05 09:41:09.682370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:18:44.871 [2024-11-05 09:41:09.682386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:18:44.871 6592.42 IOPS, 25.75 MiB/s [2024-11-05T09:41:30.829Z] 6643.11 IOPS, 25.95 MiB/s [2024-11-05T09:41:30.829Z] 6686.71 IOPS, 26.12 MiB/s [2024-11-05T09:41:30.829Z] 6731.67 IOPS, 26.30 MiB/s [2024-11-05T09:41:30.829Z] 6773.57 IOPS, 26.46 MiB/s [2024-11-05T09:41:30.829Z] 6813.63 IOPS, 26.62 MiB/s [2024-11-05T09:41:30.829Z] 6852.31 IOPS, 26.77 MiB/s [2024-11-05T09:41:30.829Z] 6889.42 IOPS, 26.91 MiB/s [2024-11-05T09:41:30.829Z] 6923.98 IOPS, 27.05 MiB/s [2024-11-05T09:41:30.829Z] 6957.49 IOPS, 27.18 MiB/s [2024-11-05T09:41:30.829Z]
[2024-11-05 09:41:19.752541] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
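Each bdevperf progress tick above pairs an IOPS figure with its MiB/s equivalent; since this job runs with a fixed 4096-byte I/O size, the two columns differ only by a constant factor. A quick sanity check on the last tick before the reset completed (a minimal sketch; it only assumes the numbers printed in the log and that `bc` is available):

```bash
#!/usr/bin/env bash
# Sanity-check one bdevperf progress tick: IOPS -> MiB/s at 4 KiB per I/O.
# Values come from the tick "6957.49 IOPS, 27.18 MiB/s" in the log above.
iops=6957.49
io_size=4096    # bytes per I/O ("IO size: 4096" in the job description)
mib_per_s=$(echo "scale=2; $iops * $io_size / 1048576" | bc -l)
echo "${iops} IOPS * ${io_size} B = ${mib_per_s} MiB/s"
# Prints 27.17; bc truncates where the log's 27.18 rounds.
```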
00:18:44.871 6988.72 IOPS, 27.30 MiB/s [2024-11-05T09:41:30.829Z] 7018.64 IOPS, 27.42 MiB/s [2024-11-05T09:41:30.829Z] 7047.58 IOPS, 27.53 MiB/s [2024-11-05T09:41:30.829Z] 7075.51 IOPS, 27.64 MiB/s [2024-11-05T09:41:30.829Z] 7102.32 IOPS, 27.74 MiB/s [2024-11-05T09:41:30.829Z] 7125.10 IOPS, 27.83 MiB/s [2024-11-05T09:41:30.829Z] 7154.08 IOPS, 27.95 MiB/s [2024-11-05T09:41:30.829Z] 7181.51 IOPS, 28.05 MiB/s [2024-11-05T09:41:30.829Z] 7209.41 IOPS, 28.16 MiB/s [2024-11-05T09:41:30.829Z] 7233.96 IOPS, 28.26 MiB/s [2024-11-05T09:41:30.829Z] 7259.64 IOPS, 28.36 MiB/s [2024-11-05T09:41:30.829Z]
00:18:44.871 Received shutdown signal, test time was about 56.019205 seconds
00:18:44.871
00:18:44.871 Latency(us)
00:18:44.871 [2024-11-05T09:41:30.829Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min     max
00:18:44.872 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:44.872 Verification LBA range: start 0x0 length 0x4000
00:18:44.872 Nvme0n1 : 56.02  7257.41  28.35  0.00  0.00  17603.49  867.61  7046430.72
00:18:44.872 [2024-11-05T09:41:30.830Z] ===================================================================================================================
00:18:44.872 [2024-11-05T09:41:30.830Z] Total : 7257.41  28.35  0.00  0.00  17603.49  867.61  7046430.72
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:18:44.872 rmmod nvme_tcp
00:18:44.872 rmmod nvme_fabrics
00:18:44.872 rmmod nvme_keyring
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 80461 ']'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 80461
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@952 -- # '[' -z 80461 ']'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # kill -0 80461
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # uname
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80461
00:18:44.872 killing process with pid 80461
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80461'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@971 -- # kill 80461
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@976 -- # wait 80461
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
00:18:44.872 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
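The trace above (together with the `_remove_spdk_ns` call that follows) is the multipath test's teardown. Condensed into plain bash, it looks roughly like the sketch below. This is reconstructed from the trace itself, not copied from SPDK's `nvmftestfini`; PID 80461, the `nvmf_*` interface names, and the `ip netns delete` equivalent of `remove_spdk_ns` are taken or inferred from this run:

```bash
#!/usr/bin/env bash
# Sketch of the teardown steps visible in the trace (run as root).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem

sync
modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics/nvme_keyring, per the rmmod lines
modprobe -v -r nvme-fabrics

kill 80461 && wait 80461 2>/dev/null   # stop the nvmf target (reactor_0); wait works
                                       # here because the target is the script's child

# Tear down the veth/bridge topology built for the virtual-network (NET_TYPE=virt) run.
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster
    ip link set "$ifc" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk   # assumed equivalent of _remove_spdk_ns
```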
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:18:45.131 ************************************
00:18:45.131 END TEST nvmf_host_multipath
00:18:45.131 ************************************
00:18:45.131 09:41:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0
00:18:45.131
00:18:45.131 real    1m1.419s
00:18:45.131 user    2m50.901s
00:18:45.131 sys     0m18.262s
00:18:45.131 09:41:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:18:45.131 09:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']'
00:18:45.131 09:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable
00:18:45.131 09:41:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:18:45.131 ************************************
00:18:45.131 START TEST nvmf_timeout
00:18:45.131 ************************************
00:18:45.131 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp
00:18:45.391 * Looking for test storage...
00:18:45.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lcov --version
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-:
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-:
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<'
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:18:45.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:45.391 --rc genhtml_branch_coverage=1
00:18:45.391 --rc genhtml_function_coverage=1
00:18:45.391 --rc genhtml_legend=1
00:18:45.391 --rc geninfo_all_blocks=1
00:18:45.391 --rc geninfo_unexecuted_blocks=1
00:18:45.391
00:18:45.391 '
[... the same option block is repeated for the LCOV_OPTS= assignment and for export 'LCOV=lcov ...' / LCOV='lcov ...' (common/autotest_common.sh@1704-@1705) ...]
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s
00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
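The `lt 1.15 2` trace above walks scripts/common.sh's component-wise version comparison: both version strings are split on `.`, `-`, or `:`, each component is validated as a decimal, and components are compared numerically left to right. A standalone sketch of that logic, reconstructed from the trace rather than copied from SPDK's source (`version_lt` is a hypothetical name; missing components are treated as 0, an assumption):

```bash
#!/usr/bin/env bash
# Component-wise "less-than" version compare, as the cmp_versions trace walks it.
version_lt() {
    local -a ver1 ver2
    IFS='.-:' read -ra ver1 <<< "$1"   # e.g. "1.15" -> (1 15)
    IFS='.-:' read -ra ver2 <<< "$2"   # e.g. "2"    -> (2)
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0            # 1 < 2 decides "1.15 < 2" immediately
        (( a > b )) && return 1
    done
    return 1                               # equal => not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"       # prints: 1.15 < 2
```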
NVMF_PORT=4420 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.391 09:41:31 
nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.391 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:45.391 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 
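Two things in the trace above are worth flagging. The repeated /opt/golangci, /opt/protoc and /opt/go prefixes in the PATH dumps are apparently the result of paths/export.sh being re-sourced for every test script, each pass prepending the same directories again. More notable is the logged shell error: at nvmf/common.sh line 33 the guard expands to '[' '' -eq 1 ']', and an empty string is not a valid operand for -eq, hence "integer expression expected". A minimal sketch of the defensive pattern that avoids this (VAR is a stand-in; the variable actually tested at line 33 never appears expanded in this trace):

# VAR is hypothetical; the real flag name at nvmf/common.sh:33 is not
# visible in this log. Defaulting it keeps the integer test well-formed.
VAR=${VAR:-0}
if [ "$VAR" -eq 1 ]; then    # '' -eq 1 would raise "integer expression expected"
    : # flag-specific nvmf_tgt arguments would be appended here
fi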
00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:45.392 Cannot find device "nvmf_init_br" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:45.392 Cannot find device "nvmf_init_br2" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout 
-- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:45.392 Cannot find device "nvmf_tgt_br" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.392 Cannot find device "nvmf_tgt_br2" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:45.392 Cannot find device "nvmf_init_br" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:45.392 Cannot find device "nvmf_init_br2" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:45.392 Cannot find device "nvmf_tgt_br" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:45.392 Cannot find device "nvmf_tgt_br2" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:45.392 Cannot find device "nvmf_br" 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:18:45.392 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:45.650 Cannot find device "nvmf_init_if" 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:45.650 Cannot find device "nvmf_init_if2" 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.650 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.650 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.651 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.651 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:18:45.651 00:18:45.651 --- 10.0.0.3 ping statistics --- 00:18:45.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.651 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.651 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.651 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:18:45.651 00:18:45.651 --- 10.0.0.4 ping statistics --- 00:18:45.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.651 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:45.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:45.651 00:18:45.651 --- 10.0.0.1 ping statistics --- 00:18:45.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.651 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:45.651 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.910 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.910 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:18:45.910 00:18:45.910 --- 10.0.0.2 ping statistics --- 00:18:45.910 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.910 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=81673 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 81673 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81673 ']' 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk 
/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:45.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:45.910 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:45.910 [2024-11-05 09:41:31.688990] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:18:45.910 [2024-11-05 09:41:31.689579] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.910 [2024-11-05 09:41:31.832388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:45.910 [2024-11-05 09:41:31.864004] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.910 [2024-11-05 09:41:31.864073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.910 [2024-11-05 09:41:31.864101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.910 [2024-11-05 09:41:31.864108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.910 [2024-11-05 09:41:31.864115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
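The target is now starting inside the namespace that nvmf_veth_init just built. Condensed from the commands traced above into one runnable sketch (the second initiator/target veth pair, the iptables ACCEPT rules and teardown are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk                               # target-side namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the netns
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # the bridge joins the host-side peers
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for link in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$link" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

The four pings above verified both directions of that 10.0.0.1 <-> 10.0.0.3 path before the target was launched.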
00:18:45.910 [2024-11-05 09:41:31.866920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.169 [2024-11-05 09:41:31.866938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.169 [2024-11-05 09:41:31.901916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:46.169 09:41:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:46.428 [2024-11-05 09:41:32.278056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.428 09:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:46.687 Malloc0 00:18:46.687 09:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:46.945 09:41:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:47.204 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:47.463 [2024-11-05 09:41:33.348798] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=81719 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 81719 /var/tmp/bdevperf.sock 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81719 ']' 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
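While the harness waits for bdevperf's RPC socket to come up, this is the loop being traced, reduced to a sketch (the real waitforlisten in test/common/autotest_common.sh is more thorough; polling rpc_get_methods is an assumption of this sketch):

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    for ((i = 100; i > 0; i--)); do               # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || return 1    # bail out if the app died
        # The app counts as "listening" once any RPC succeeds on its socket.
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
        sleep 0.5
    done
    return 1
}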
00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.463 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:47.721 [2024-11-05 09:41:33.423752] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:18:47.721 [2024-11-05 09:41:33.423859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81719 ] 00:18:47.722 [2024-11-05 09:41:33.567571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.722 [2024-11-05 09:41:33.599851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:47.722 [2024-11-05 09:41:33.629679] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:47.722 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:47.722 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:18:47.722 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:47.980 09:41:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:48.547 NVMe0n1 00:18:48.547 09:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=81731 00:18:48.547 09:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:48.547 09:41:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:18:48.547 Running I/O for 10 seconds... 
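Stripped of the xtrace noise, the bring-up that just finished is seven RPCs: five against the target's default socket to provision the subsystem, then two against bdevperf's socket to attach the controller with the parameters this test is about. A sketch of the sequence, with the values from this run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Target side (/var/tmp/spdk.sock): transport, backing bdev, subsystem,
# namespace, listener. Flags are exactly as traced above.
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side (bdevperf's socket): -r -1 leaves retries unbounded, and
# the attach sets a 5 s controller-loss timeout with 2 s between
# reconnect attempts.
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

With those knobs, yanking the path out from under the controller should stall I/O, produce a reconnect attempt every 2 s, and destroy the controller only if the outage outlives the 5 s window.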
00:18:49.483 09:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:18:49.745 6932.00 IOPS, 27.08 MiB/s [2024-11-05T09:41:35.703Z]
00:18:49.745 [2024-11-05 09:41:35.596916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba0b30 is same with the state(6) to be set
[... the tcp.c:1773 message repeats dozens of times with advancing timestamps while the removed listener's qpair is torn down; the target- and initiator-side messages below were interleaved mid-line with it in the raw log and are de-interleaved here ...]
00:18:49.746 [2024-11-05 09:41:35.597140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:18:49.746 [2024-11-05 09:41:35.597170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the three remaining queued admin ASYNC EVENT REQUESTs (cid 1, 2, 3) are printed and aborted the same way ...]
00:18:49.746 [2024-11-05 09:41:35.597244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125e50 is same with the state(6) to be set
00:18:49.747 [2024-11-05 09:41:35.598825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:65824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:18:49.747 [2024-11-05 09:41:35.598871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the remaining in-flight READs (lba 65832 through 66176, len:8 each) are printed and completed with the same ABORTED - SQ DELETION status, one command/completion pair per I/O ...]
00:18:49.748 [2024-11-05 09:41:35.602441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:66192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:66200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:66208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:66216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:66224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:66232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.602943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:66240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.602956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.603102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:66248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.603243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.603362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:66256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.603382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.603395] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:66264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.603650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.603675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:66272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.748 [2024-11-05 09:41:35.603686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.748 [2024-11-05 09:41:35.603697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:66280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.603707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.603718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:66288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.603728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.603739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:66296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.603748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:66304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:66320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:66336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:66344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:66352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.604908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:66360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.604917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:66368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:66376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:66384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:66392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:66400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:66416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:77 nsid:1 lba:66424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:66432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:66440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.605942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.605954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:66456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:66464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:66472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:66480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:66488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:66504 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:66512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.606979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.606991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:66520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:66528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:66536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:66544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:66552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:66560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:66568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:66576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.749 [2024-11-05 09:41:35.607646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:66584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:49.749 [2024-11-05 09:41:35.607901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.749 [2024-11-05 09:41:35.607923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.607934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:66600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:66608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:66616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:66624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:66632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:66640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:66648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:66656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.608898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:66664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.608907] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:66672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.609213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:66680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.609234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:66688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.609255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:66696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.609514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:66704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.609534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:66728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.609555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.609699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:66744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.609951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.609975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.609986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.609995] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:66768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:66776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:66784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:66800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:66808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.610755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:66816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.610992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.611007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:66824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.611018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.611029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.611038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.611049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:66840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:49.750 [2024-11-05 09:41:35.611169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.611191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:66712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:49.750 [2024-11-05 09:41:35.611315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.611336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1193280 is same with the state(6) to be set 00:18:49.750 [2024-11-05 09:41:35.611457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:49.750 [2024-11-05 09:41:35.611473] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:49.750 [2024-11-05 09:41:35.611482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66720 len:8 PRP1 0x0 PRP2 0x0 00:18:49.750 [2024-11-05 09:41:35.611740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:49.750 [2024-11-05 09:41:35.612018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125e50 (9): Bad file descriptor 00:18:49.751 [2024-11-05 09:41:35.612359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:49.751 [2024-11-05 09:41:35.612555] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:49.751 [2024-11-05 09:41:35.612588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1125e50 with addr=10.0.0.3, port=4420 00:18:49.751 [2024-11-05 09:41:35.612832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125e50 is same with the state(6) to be set 00:18:49.751 [2024-11-05 09:41:35.612880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125e50 (9): Bad file descriptor 00:18:49.751 [2024-11-05 09:41:35.612898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:49.751 [2024-11-05 09:41:35.612908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:49.751 [2024-11-05 09:41:35.612918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:49.751 [2024-11-05 09:41:35.612928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
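In the completions above, "(00/08)" is NVMe Status Code Type 0x0 (Generic Command Status) with Status Code 0x08, "Command Aborted due to SQ Deletion": once the target drops the connection, every command still queued on qid:1 is completed as aborted. The reconnect attempts that follow fail with errno = 111, which on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.3:4420 until the listener returns. An illustrative one-liner for decoding such errno values (not part of the test harness):

    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
    # ECONNREFUSED Connection refused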
00:18:49.751 [2024-11-05 09:41:35.613304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:49.751 09:41:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:18:51.623 4114.00 IOPS, 16.07 MiB/s [2024-11-05T09:41:37.839Z] 2742.67 IOPS, 10.71 MiB/s [2024-11-05T09:41:37.839Z] [2024-11-05 09:41:37.613473] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:51.881 [2024-11-05 09:41:37.613546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1125e50 with addr=10.0.0.3, port=4420 00:18:51.881 [2024-11-05 09:41:37.613563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125e50 is same with the state(6) to be set 00:18:51.881 [2024-11-05 09:41:37.613589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125e50 (9): Bad file descriptor 00:18:51.881 [2024-11-05 09:41:37.613609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:51.882 [2024-11-05 09:41:37.613620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:51.882 [2024-11-05 09:41:37.613632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:51.882 [2024-11-05 09:41:37.613644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:51.882 [2024-11-05 09:41:37.613655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:51.882 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:18:51.882 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:51.882 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:52.140 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:52.140 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:18:52.140 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:52.140 09:41:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:52.399 09:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:52.399 09:41:38 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:18:53.594 2057.00 IOPS, 8.04 MiB/s [2024-11-05T09:41:39.816Z] 1645.60 IOPS, 6.43 MiB/s [2024-11-05T09:41:39.816Z] [2024-11-05 09:41:39.613776] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:53.858 [2024-11-05 09:41:39.613866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1125e50 with addr=10.0.0.3, port=4420 00:18:53.858 [2024-11-05 09:41:39.613888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1125e50 is same with the state(6) to be set 00:18:53.859 [2024-11-05 09:41:39.613913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1125e50 (9): Bad file descriptor 00:18:53.859 [2024-11-05 09:41:39.613933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: 
*ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:53.859 [2024-11-05 09:41:39.613943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:53.859 [2024-11-05 09:41:39.613955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:53.859 [2024-11-05 09:41:39.613967] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:53.859 [2024-11-05 09:41:39.613978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:55.735 1371.33 IOPS, 5.36 MiB/s [2024-11-05T09:41:41.693Z] 1175.43 IOPS, 4.59 MiB/s [2024-11-05T09:41:41.693Z] [2024-11-05 09:41:41.614008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:18:55.735 [2024-11-05 09:41:41.614053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:18:55.735 [2024-11-05 09:41:41.614065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:18:55.735 [2024-11-05 09:41:41.614077] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:18:55.735 [2024-11-05 09:41:41.614088] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:18:56.670 1028.50 IOPS, 4.02 MiB/s 00:18:56.670 Latency(us) 00:18:56.670 [2024-11-05T09:41:42.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:56.670 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:56.670 Verification LBA range: start 0x0 length 0x4000 00:18:56.670 NVMe0n1 : 8.22 1001.27 3.91 15.58 0.00 125915.87 4170.47 7046430.72 00:18:56.670 [2024-11-05T09:41:42.628Z] =================================================================================================================== 00:18:56.670 [2024-11-05T09:41:42.628Z] Total : 1001.27 3.91 15.58 0.00 125915.87 4170.47 7046430.72 00:18:56.670 { 00:18:56.670 "results": [ 00:18:56.670 { 00:18:56.670 "job": "NVMe0n1", 00:18:56.670 "core_mask": "0x4", 00:18:56.670 "workload": "verify", 00:18:56.670 "status": "finished", 00:18:56.670 "verify_range": { 00:18:56.670 "start": 0, 00:18:56.670 "length": 16384 00:18:56.670 }, 00:18:56.670 "queue_depth": 128, 00:18:56.670 "io_size": 4096, 00:18:56.670 "runtime": 8.217554, 00:18:56.670 "iops": 1001.2711811811641, 00:18:56.670 "mibps": 3.9112155514889224, 00:18:56.670 "io_failed": 128, 00:18:56.670 "io_timeout": 0, 00:18:56.670 "avg_latency_us": 125915.87234953654, 00:18:56.670 "min_latency_us": 4170.472727272727, 00:18:56.670 "max_latency_us": 7046430.72 00:18:56.670 } 00:18:56.670 ], 00:18:56.670 "core_count": 1 00:18:56.670 } 00:18:57.238 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:18:57.238 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:57.238 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:57.806 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:18:57.806 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:18:57.806 09:41:43 
nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:57.806 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 81731 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 81719 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81719 ']' 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81719 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81719 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:18:58.065 killing process with pid 81719 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81719' 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81719 00:18:58.065 Received shutdown signal, test time was about 9.421196 seconds 00:18:58.065 00:18:58.065 Latency(us) 00:18:58.065 [2024-11-05T09:41:44.023Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:58.065 [2024-11-05T09:41:44.023Z] =================================================================================================================== 00:18:58.065 [2024-11-05T09:41:44.023Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81719 00:18:58.065 09:41:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:58.325 [2024-11-05 09:41:44.181597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=81858 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 81858 /var/tmp/bdevperf.sock 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 81858 ']' 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:58.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
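bdevperf is started here in wait mode (-z) with a private RPC socket (-r /var/tmp/bdevperf.sock), so it comes up idle and is configured entirely over RPC before the workload runs. The overall driving pattern, sketched from the commands visible in this trace (the harness backgrounds the first process and polls the socket via waitforlisten):

    # 1. start bdevperf idle, listening on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    # 2. create the NVMe bdev over RPC (the bdev_nvme_attach_controller call below)
    # 3. kick off the configured workload
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests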
00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:58.325 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:18:58.325 [2024-11-05 09:41:44.252566] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:18:58.325 [2024-11-05 09:41:44.252651] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81858 ] 00:18:58.584 [2024-11-05 09:41:44.397920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:58.584 [2024-11-05 09:41:44.430634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.584 [2024-11-05 09:41:44.460168] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:58.584 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:58.584 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0 00:18:58.584 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:58.843 09:41:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:18:59.411 NVMe0n1 00:18:59.411 09:41:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=81870 00:18:59.411 09:41:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:59.411 09:41:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:18:59.411 Running I/O for 10 seconds... 
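The controller attached above was created with bdev_nvme's reconnect tuning, which drives everything that follows: --reconnect-delay-sec 1 retries the connection roughly once per second after a disconnect, --fast-io-fail-timeout-sec 2 lets queued I/O start failing after 2 seconds without a connection, and --ctrlr-loss-timeout-sec 5 gives up and deletes the controller after 5 seconds of failed reconnects. For reference, the full attach as issued by the harness:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1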
00:19:00.349 09:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:00.611 6805.00 IOPS, 26.58 MiB/s [2024-11-05T09:41:46.569Z] [2024-11-05 09:41:46.389467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.611 [2024-11-05 09:41:46.390140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.611 [2024-11-05 09:41:46.390285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.611 [2024-11-05 09:41:46.390657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.611 [2024-11-05 09:41:46.390772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.611 [2024-11-05 09:41:46.391123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.611 [2024-11-05 09:41:46.391309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.611 [2024-11-05 09:41:46.391628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.391731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.392059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.392473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.392594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.392938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.393061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.393375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.393488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.393801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.393924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.394261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 
lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.394369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the intervening WRITE command/completion pairs (lba 62832-62976) omitted; each is completed with ABORTED - SQ DELETION (00/08) as the submission queue is deleted ...]
00:19:00.612 [2024-11-05 09:41:46.400499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 [2024-11-05 09:41:46.400605]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.400621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.400631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.400642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.400652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.400663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.400673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.400806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.400905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.400923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:63024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.400933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.400944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:63032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.401225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.401242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.401252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.401265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.401274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.401285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.612 [2024-11-05 09:41:46.401391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.612 [2024-11-05 09:41:46.401412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401424] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:63088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:63104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:63120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:63136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:63160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:63168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.401963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 
[2024-11-05 09:41:46.401984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:63232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.401994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:63240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:63248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:63272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:63288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:63328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.613 [2024-11-05 09:41:46.402301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.613 [2024-11-05 09:41:46.402311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:63368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:63376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:43 nsid:1 lba:63392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:63400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:63408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:63424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:63432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 
09:41:46.402856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.402983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.402994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.403004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.403017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.403027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.403039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.403049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.403060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.614 [2024-11-05 09:41:46.403070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.614 [2024-11-05 09:41:46.403082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.614 [2024-11-05 09:41:46.403091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:62664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:62712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:62720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:00.615 [2024-11-05 09:41:46.403389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:00.615 [2024-11-05 09:41:46.403410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100f280 is same with the state(6) to be set 00:19:00.615 [2024-11-05 09:41:46.403436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:00.615 [2024-11-05 09:41:46.403444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:00.615 [2024-11-05 09:41:46.403453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:63648 len:8 PRP1 0x0 PRP2 0x0 00:19:00.615 [2024-11-05 09:41:46.403462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.615 [2024-11-05 09:41:46.403607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.615 [2024-11-05 09:41:46.403628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.615 [2024-11-05 09:41:46.403647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:00.615 [2024-11-05 09:41:46.403666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:00.615 [2024-11-05 09:41:46.403675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1e50 is same with the state(6) to be set 00:19:00.615 [2024-11-05 09:41:46.403936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:19:00.615 [2024-11-05 09:41:46.403961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1e50 (9): Bad file descriptor 00:19:00.615 [2024-11-05 09:41:46.404052] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:00.615 [2024-11-05 09:41:46.404074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1e50 with addr=10.0.0.3, port=4420 00:19:00.615 [2024-11-05 09:41:46.404085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1e50 is same with the state(6) to be set 00:19:00.615 [2024-11-05 09:41:46.404104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1e50 (9): Bad file descriptor 00:19:00.615 [2024-11-05 09:41:46.404120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:19:00.615 [2024-11-05 09:41:46.404129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:19:00.615 [2024-11-05 09:41:46.404140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:19:00.615 [2024-11-05 09:41:46.404151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
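The reconnect failures above all reduce to errno 111 (ECONNREFUSED): once nvmf_subsystem_remove_listener takes effect, nothing accepts TCP connections on 10.0.0.3:4420, the host's queued I/O completes as ABORTED - SQ DELETION, and every reconnect from the io_uring sock layer is refused until the listener returns. Below is a minimal sketch of the same injection against a running target; the rpc.py path and NQN are taken from this log, while the /dev/tcp probe is purely illustrative and not part of timeout.sh.

  #!/usr/bin/env bash
  # Sketch only: reproduce the listener-removal failure mode seen above.
  # Assumes a running SPDK nvmf target serving nqn.2016-06.io.spdk:cnode1
  # with a TCP listener on 10.0.0.3:4420 (values taken from this log).
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  # Drop the listener: queued host I/O completes as ABORTED - SQ DELETION,
  # and reconnect attempts start failing with ECONNREFUSED (errno 111).
  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420

  # Illustrative probe (not part of timeout.sh): connect() now fails the
  # same way uring_sock_create does in the trace above.
  if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.3/4420' 2>/dev/null; then
      echo "connect() refused while the listener is down"
  fi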
00:19:00.615 [2024-11-05 09:41:46.404161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
09:41:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:19:01.553 3914.50 IOPS, 15.29 MiB/s [2024-11-05T09:41:47.511Z]
00:19:01.553 [2024-11-05 09:41:47.404295] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:01.553 [2024-11-05 09:41:47.404367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1e50 with addr=10.0.0.3, port=4420
00:19:01.553 [2024-11-05 09:41:47.404383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1e50 is same with the state(6) to be set
00:19:01.553 [2024-11-05 09:41:47.404407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1e50 (9): Bad file descriptor
00:19:01.553 [2024-11-05 09:41:47.404427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:19:01.553 [2024-11-05 09:41:47.404437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:19:01.553 [2024-11-05 09:41:47.404449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:19:01.553 [2024-11-05 09:41:47.404460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:19:01.553 [2024-11-05 09:41:47.404472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:19:01.553 09:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:01.812 [2024-11-05 09:41:47.658052] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:19:01.812 09:41:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 81870
00:19:02.637 2609.67 IOPS, 10.19 MiB/s [2024-11-05T09:41:48.595Z]
00:19:02.637 [2024-11-05 09:41:48.421826] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
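Taken together, steps @87 through @92 are the core of the timeout test: yank the listener out from under an active bdevperf run, give the host a second to hit its reconnect path, restore the listener, and wait for the backgrounded bdevperf job (PID 81870 in this run) to finish. A condensed sketch of that sequence, assuming $bdevperf_pid holds the background job's PID (a stand-in name, not a variable from timeout.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420  # @87: inject the outage
  sleep 1                                                                  # @90: let reconnect attempts fail
  "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.3 -s 4420     # @91: restore the path
  wait "$bdevperf_pid"                                                     # @92: bdevperf must survive the outage and exit cleanly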
00:19:04.517 1957.25 IOPS, 7.65 MiB/s [2024-11-05T09:41:51.410Z]
3070.20 IOPS, 11.99 MiB/s [2024-11-05T09:41:52.343Z]
4089.67 IOPS, 15.98 MiB/s [2024-11-05T09:41:53.289Z]
4822.00 IOPS, 18.84 MiB/s [2024-11-05T09:41:54.676Z]
5363.38 IOPS, 20.95 MiB/s [2024-11-05T09:41:55.242Z]
5779.22 IOPS, 22.58 MiB/s [2024-11-05T09:41:55.501Z]
6124.80 IOPS, 23.93 MiB/s
00:19:09.543 Latency(us)
[2024-11-05T09:41:55.501Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:09.543 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:09.543 Verification LBA range: start 0x0 length 0x4000
00:19:09.543 NVMe0n1 : 10.01 6129.81 23.94 0.00 0.00 20845.21 1742.66 3035150.89
[2024-11-05T09:41:55.501Z] ===================================================================================================================
00:19:09.543 [2024-11-05T09:41:55.501Z] Total : 6129.81 23.94 0.00 0.00 20845.21 1742.66 3035150.89
00:19:09.543 {
00:19:09.543   "results": [
00:19:09.543     {
00:19:09.543       "job": "NVMe0n1",
00:19:09.543       "core_mask": "0x4",
00:19:09.543       "workload": "verify",
00:19:09.543       "status": "finished",
00:19:09.543       "verify_range": {
00:19:09.543         "start": 0,
00:19:09.543         "length": 16384
00:19:09.543       },
00:19:09.543       "queue_depth": 128,
00:19:09.543       "io_size": 4096,
00:19:09.543       "runtime": 10.009615,
00:19:09.543       "iops": 6129.80619134702,
00:19:09.543       "mibps": 23.944555434949297,
00:19:09.543       "io_failed": 0,
00:19:09.543       "io_timeout": 0,
00:19:09.543       "avg_latency_us": 20845.21246060685,
00:19:09.543       "min_latency_us": 1742.6618181818183,
00:19:09.543       "max_latency_us": 3035150.8945454545
00:19:09.543     }
00:19:09.543   ],
00:19:09.543   "core_count": 1
00:19:09.543 }
00:19:09.543 09:41:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=81980
00:19:09.543 09:41:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:09.543 09:41:55 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:19:09.543 Running I/O for 10 seconds...
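The numbers in the JSON block are internally consistent: 6129.81 IOPS at a 4096-byte I/O size is 6129.81 x 4096 / 2^20 = 23.94 MiB/s, matching "mibps", and with "queue_depth" 128, Little's law (depth / IOPS) predicts an average latency of about 20.9 ms, close to the reported 20845 us; the 3.0 s "max_latency_us" is consistent with I/O held across the listener outage. A small check over these results, assuming the JSON object has been saved to results.json and jq is available (neither step is part of the test run):

  # Sanity-check the bdevperf results above; results.json is an assumed
  # filename holding the JSON object printed in the log.
  jq -r '.results[0] | "\(.iops) \(.io_size) \(.mibps) \(.queue_depth) \(.avg_latency_us)"' results.json |
  awk '{
      # MiB/s = IOPS * io_size / 2^20
      printf "mibps: reported %.2f, computed %.2f\n", $3, $1 * $2 / 1048576
      # average latency ~ queue_depth / IOPS, converted to microseconds
      printf "latency: reported %.0f us, predicted %.0f us\n", $5, $4 / $1 * 1e6
  }'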
00:19:10.477 09:41:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:10.738 6804.00 IOPS, 26.58 MiB/s [2024-11-05T09:41:56.696Z]
00:19:10.738 [2024-11-05 09:41:56.576307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:64664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:19:10.738 [2024-11-05 09:41:56.577221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:19:10.738-00:19:10.739 [2024-11-05 09:41:56.577359-56.582460] (identical command/completion pairs elided: the remaining queued I/O on qid:1, READ lba:63776 through lba:64152 and WRITE lba:64672, len:8 each, is printed and completed as ABORTED - SQ DELETION (00/08))
00:19:10.739 [2024-11-05 09:41:56.582481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:64168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:64176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:64184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:64192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:64200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:64208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:64216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:64224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:64232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:64240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:64248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:64256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:64264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:64272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:64280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:64288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 
[2024-11-05 09:41:56.582933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:64320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.582985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.582997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:64352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:64376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:64384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:64392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:64400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:64424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:64432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:64440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:64448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:64464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.740 [2024-11-05 09:41:56.583324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.740 [2024-11-05 09:41:56.583336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:64472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583357] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:64480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:64488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:64496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:64512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:64520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:64528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:64536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:64552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 
lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:64568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:64576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:64584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:64592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:64600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:64608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:64616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:64624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:64648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:10.741 [2024-11-05 09:41:56.583825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1010350 is same with the state(6) to be set 00:19:10.741 [2024-11-05 09:41:56.583860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.583868] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.583877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64656 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.583886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.583904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.583912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64680 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.583922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583932] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.583939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.583948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64688 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.583958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.583968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.583975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.583983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64696 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.583992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.584009] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.584017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64704 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.584026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.584044] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.584053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64712 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.584062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.584079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.584087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64720 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.584096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.584114] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.584122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64728 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.584130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584140] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.584147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.584155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64736 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.584164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.741 [2024-11-05 09:41:56.584181] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.741 [2024-11-05 09:41:56.584189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64744 len:8 PRP1 0x0 PRP2 0x0 00:19:10.741 [2024-11-05 09:41:56.584205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.741 [2024-11-05 09:41:56.584216] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.742 [2024-11-05 09:41:56.584223] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.742 [2024-11-05 09:41:56.584231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64752 len:8 PRP1 0x0 PRP2 0x0 00:19:10.742 [2024-11-05 09:41:56.584240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.742 [2024-11-05 09:41:56.584258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.742 [2024-11-05 09:41:56.584266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64760 len:8 PRP1 0x0 PRP2 0x0 00:19:10.742 [2024-11-05 09:41:56.584275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584284] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.742 [2024-11-05 09:41:56.584291] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.742 [2024-11-05 09:41:56.584299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64768 len:8 PRP1 0x0 PRP2 0x0 00:19:10.742 [2024-11-05 09:41:56.584309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.742 [2024-11-05 09:41:56.584329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.742 [2024-11-05 09:41:56.584337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64776 len:8 PRP1 0x0 PRP2 0x0 00:19:10.742 [2024-11-05 09:41:56.584346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.742 [2024-11-05 09:41:56.584364] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.742 [2024-11-05 09:41:56.584372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64784 len:8 PRP1 0x0 PRP2 0x0 00:19:10.742 [2024-11-05 09:41:56.584380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:10.742 [2024-11-05 09:41:56.584397] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:10.742 [2024-11-05 09:41:56.584406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64792 len:8 PRP1 0x0 PRP2 0x0 00:19:10.742 [2024-11-05 09:41:56.584415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.742 [2024-11-05 09:41:56.584543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.742 [2024-11-05 09:41:56.584563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.742 [2024-11-05 09:41:56.584583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:10.742 [2024-11-05 09:41:56.584605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:10.742 [2024-11-05 09:41:56.584615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1e50 is same with the state(6) to be set 00:19:10.742 [2024-11-05 09:41:56.584870] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:19:10.742 [2024-11-05 09:41:56.584895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1e50 (9): Bad file descriptor 00:19:10.742 [2024-11-05 09:41:56.584990] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:10.742 [2024-11-05 09:41:56.585024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1e50 with addr=10.0.0.3, port=4420 00:19:10.742 [2024-11-05 09:41:56.585036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfa1e50 is same with the state(6) to be set 00:19:10.742 [2024-11-05 09:41:56.585055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfa1e50 (9): Bad file descriptor 00:19:10.742 [2024-11-05 09:41:56.585071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:19:10.742 [2024-11-05 09:41:56.585080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:19:10.742 [2024-11-05 09:41:56.585091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:19:10.742 [2024-11-05 09:41:56.585101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
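The two failure signatures above are both consequences of the target side dropping the connection: ABORTED - SQ DELETION (status 00/08, "Command Aborted due to SQ Deletion" in NVMe terms) is reported for every in-flight and queued command once the submission queue goes away, and errno 111 is ECONNREFUSED from each reconnect attempt while the subsystem has no TCP listener. A minimal sketch of the listener toggle driving this phase of the timeout test, reusing the same rpc.py invocations that appear elsewhere in this run:

    # tear the listener down; in-flight I/O is aborted with SQ DELETION and
    # every host reconnect is refused (errno 111) until it comes back
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # restore the listener so the pending controller reset can succeed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
        nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420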
00:19:10.742 [2024-11-05 09:41:56.585111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:19:10.742 09:41:56 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3
00:19:11.677 3986.00 IOPS, 15.57 MiB/s [2024-11-05T09:41:57.635Z]
[2024-11-05 09:41:57.585228] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
[2024-11-05 09:41:57.585294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfa1e50 with addr=10.0.0.3, port=4420
[... the reconnect fails the same way at 09:41:57, again at 09:41:58 (2657.33 IOPS, 10.38 MiB/s) and again at 09:41:59 (1993.00 IOPS, 7.79 MiB/s): Failed to flush tqpair=0xfa1e50 (9): Bad file descriptor, Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed ...]
00:19:13.805 [2024-11-05 09:41:59.590572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:19:13.805 09:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:14.063 [2024-11-05 09:41:59.913163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:19:14.063 09:41:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 81980
00:19:14.890 1594.40 IOPS, 6.23 MiB/s [2024-11-05T09:42:00.848Z]
[2024-11-05 09:42:00.616821] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
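With the listener back, the reconnect that bdev_nvme keeps scheduling finally lands inside the controller-loss window and the reset completes; the interleaved per-second IOPS markers show throughput collapsing while the path was down and ramping back afterwards. One way to pull this retry timeline out of a captured console log (build.log is a hypothetical capture file name):

    grep -E 'connect\(\) failed|Resetting controller (failed|successful)' build.log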
00:19:16.834 2524.50 IOPS, 9.86 MiB/s [2024-11-05T09:42:03.727Z]
3477.00 IOPS, 13.58 MiB/s [2024-11-05T09:42:04.663Z]
4195.25 IOPS, 16.39 MiB/s [2024-11-05T09:42:05.599Z]
4762.33 IOPS, 18.60 MiB/s [2024-11-05T09:42:05.599Z]
5220.50 IOPS, 20.39 MiB/s
00:19:19.641 Latency(us)
00:19:19.641 [2024-11-05T09:42:05.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:19.641 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:19:19.641 Verification LBA range: start 0x0 length 0x4000
00:19:19.641 NVMe0n1 : 10.01 5226.08 20.41 3534.15 0.00 14572.24 848.99 3035150.89
00:19:19.641 [2024-11-05T09:42:05.599Z] ===================================================================================================================
00:19:19.641 [2024-11-05T09:42:05.599Z] Total : 5226.08 20.41 3534.15 0.00 14572.24 0.00 3035150.89
00:19:19.641 {
00:19:19.641   "results": [
00:19:19.641     {
00:19:19.641       "job": "NVMe0n1",
00:19:19.641       "core_mask": "0x4",
00:19:19.641       "workload": "verify",
00:19:19.641       "status": "finished",
00:19:19.641       "verify_range": {
00:19:19.641         "start": 0,
00:19:19.641         "length": 16384
00:19:19.641       },
00:19:19.641       "queue_depth": 128,
00:19:19.641       "io_size": 4096,
00:19:19.641       "runtime": 10.006935,
00:19:19.641       "iops": 5226.075716490614,
00:19:19.641       "mibps": 20.41435826754146,
00:19:19.641       "io_failed": 35366,
00:19:19.641       "io_timeout": 0,
00:19:19.641       "avg_latency_us": 14572.24366519305,
00:19:19.641       "min_latency_us": 848.9890909090909,
00:19:19.641       "max_latency_us": 3035150.8945454545
00:19:19.641     }
00:19:19.641   ],
00:19:19.641   "core_count": 1
00:19:19.641 }
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 81858
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81858 ']'
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81858
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81858
00:19:19.641 killing process with pid 81858
00:19:19.641 Received shutdown signal, test time was about 10.000000 seconds
00:19:19.641
00:19:19.641 Latency(us)
00:19:19.641 [2024-11-05T09:42:05.599Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:19.641 [2024-11-05T09:42:05.599Z] ===================================================================================================================
00:19:19.641 [2024-11-05T09:42:05.599Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81858'
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81858
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81858
00:19:19.641 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
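The JSON block mirrors the human-readable summary above it: io_failed (35366) lines up with the flood of SQ DELETION aborts earlier in the run, and max_latency_us (about 3.0 s) reflects requests that sat queued across the reset. A quick way to extract the headline fields, assuming the JSON has been captured to a file (bdevperf_results.json is a hypothetical name):

    jq '.results[0] | {iops, mibps, io_failed, avg_latency_us, max_latency_us}' bdevperf_results.json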
00:19:19.641 09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=82091
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 82091 /var/tmp/bdevperf.sock
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@833 -- # '[' -z 82091 ']'
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # local max_retries=100
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # xtrace_disable
09:42:05 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:19:19.899 [2024-11-05 09:42:05.645646] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization...
00:19:19.899 [2024-11-05 09:42:05.645746] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82091 ]
00:19:19.899 [2024-11-05 09:42:05.795544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:19.899 [2024-11-05 09:42:05.828605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:19:19.899 [2024-11-05 09:42:05.858472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring
00:19:20.836 09:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@862 -- # (( i == 0 ))
09:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@866 -- # return 0
09:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 82091 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
09:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=82107
09:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:19:21.097 09:42:06 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:19:21.356 NVMe0n1
00:19:21.356 09:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=82148
09:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1
09:42:07 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:19:21.614 Running I/O for 10 seconds...
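This second bdevperf instance is launched with -z, so it initializes, opens the RPC socket, and waits; nothing runs until the @123 bdevperf.py perform_tests call fires. The attach options are what the rest of the test exercises: on path loss the host retries every --reconnect-delay-sec 2 seconds and gives up once --ctrlr-loss-timeout-sec 5 seconds elapse. A condensed sketch of the control flow the script drives, with the long-running processes backgrounded here purely for illustration:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    # configure the NVMe bdev layer, then attach the remote controller with the
    # reconnect/loss timeouts under test
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    # kick off the 10-second run that bdevperf -z is waiting for
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &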
00:19:22.550 09:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:19:22.811 14605.00 IOPS, 57.05 MiB/s [2024-11-05T09:42:08.769Z]
00:19:22.811 [2024-11-05 09:42:08.528240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:22.811 [2024-11-05 09:42:08.528306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / ABORTED - SQ DELETION pairs repeat for cid 125 down through 88; this run is randread, so the lba values (40088, 119752, 88016, 57432, ...) are scattered across the namespace ...]
00:19:22.812 [2024-11-05 09:42:08.530532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:19:22.812 [2024-11-05 09:42:08.530541]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:34040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:35128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:52800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:50936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:1672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.812 [2024-11-05 09:42:08.530819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:84688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.812 [2024-11-05 09:42:08.530830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:88464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:90976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.530982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.530991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:128096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:87496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:72552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:95048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:22.813 [2024-11-05 09:42:08.531184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:33224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:65672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.531821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.531832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:31336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532408] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:104352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:57584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:52480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:52024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.813 [2024-11-05 09:42:08.532613] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:82552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.813 [2024-11-05 09:42:08.532622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:56728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:106480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:95416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532818] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:90504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.532986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.532998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 
lba:129952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:48184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:128256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:64456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79056 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:22.814 [2024-11-05 09:42:08.533407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.814 [2024-11-05 09:42:08.533417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd6140 is same with the state(6) to be set 00:19:22.815 [2024-11-05 09:42:08.533430] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:22.815 [2024-11-05 09:42:08.533438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:22.815 [2024-11-05 09:42:08.533446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23888 len:8 PRP1 0x0 PRP2 0x0 00:19:22.815 [2024-11-05 09:42:08.533456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.815 [2024-11-05 09:42:08.533577] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.815 [2024-11-05 09:42:08.533595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.815 [2024-11-05 09:42:08.533605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 
cdw11:00000000 00:19:22.815 [2024-11-05 09:42:08.533614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.815 [2024-11-05 09:42:08.533624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.815 [2024-11-05 09:42:08.533632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.815 [2024-11-05 09:42:08.533642] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:22.815 [2024-11-05 09:42:08.533650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:22.815 [2024-11-05 09:42:08.533659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68e50 is same with the state(6) to be set 00:19:22.815 [2024-11-05 09:42:08.533946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:19:22.815 [2024-11-05 09:42:08.533977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68e50 (9): Bad file descriptor 00:19:22.815 [2024-11-05 09:42:08.534077] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:22.815 [2024-11-05 09:42:08.534101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf68e50 with addr=10.0.0.3, port=4420 00:19:22.815 [2024-11-05 09:42:08.534112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68e50 is same with the state(6) to be set 00:19:22.815 [2024-11-05 09:42:08.534131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68e50 (9): Bad file descriptor 00:19:22.815 [2024-11-05 09:42:08.534148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:19:22.815 [2024-11-05 09:42:08.534157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:19:22.815 [2024-11-05 09:42:08.534169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:19:22.815 [2024-11-05 09:42:08.534179] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
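Everything above is the expected blast radius of host/timeout.sh@126: yanking the TCP listener aborts all queued reads with SQ DELETION status and leaves the host retrying against a connection-refused port. A minimal sketch of that fault-injection step, reusing the exact rpc.py call just traced (the surrounding wait is traced at host/timeout.sh@128 below; the shell variables are illustrative, not from the test):

    # Sketch of the fault injection, assuming the paths and NQN from this run.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop the listener while I/O is in flight: every queued READ completes
    # with ABORTED - SQ DELETION (00/08), as in the entries above.
    "$RPC" nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

    # With nothing listening on 10.0.0.3:4420, each reconnect attempt fails
    # with errno 111 (ECONNREFUSED) until the controller-loss window expires.
    wait "$BDEVPERF_PID"   # illustrative variable; this run waits on pid 82148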
00:19:22.815 [2024-11-05 09:42:08.534190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
09:42:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 82148
00:19:24.683 8318.50 IOPS, 32.49 MiB/s [2024-11-05T09:42:10.641Z]
00:19:24.683 5545.67 IOPS, 21.66 MiB/s [2024-11-05T09:42:10.641Z]
00:19:24.683 [2024-11-05 09:42:10.534599] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:24.683 [2024-11-05 09:42:10.535004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf68e50 with addr=10.0.0.3, port=4420
00:19:24.683 [2024-11-05 09:42:10.535436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68e50 is same with the state(6) to be set
00:19:24.683 [2024-11-05 09:42:10.535891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68e50 (9): Bad file descriptor
00:19:24.683 [2024-11-05 09:42:10.536318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:19:24.683 [2024-11-05 09:42:10.536721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:19:24.683 [2024-11-05 09:42:10.537170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:19:24.683 [2024-11-05 09:42:10.537407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:19:24.683 [2024-11-05 09:42:10.537847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:19:26.552 4159.25 IOPS, 16.25 MiB/s [2024-11-05T09:42:12.768Z]
00:19:26.552 3327.40 IOPS, 13.00 MiB/s [2024-11-05T09:42:12.768Z]
00:19:26.810 [2024-11-05 09:42:12.538496] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:19:26.810 [2024-11-05 09:42:12.539276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf68e50 with addr=10.0.0.3, port=4420
00:19:26.810 [2024-11-05 09:42:12.539894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf68e50 is same with the state(6) to be set
00:19:26.810 [2024-11-05 09:42:12.540432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf68e50 (9): Bad file descriptor
00:19:26.810 [2024-11-05 09:42:12.540778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:19:26.810 [2024-11-05 09:42:12.540880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:19:26.810 [2024-11-05 09:42:12.541204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:19:26.810 [2024-11-05 09:42:12.541319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
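Three reset cycles in, the cadence is worth noting: the reconnect attempts land roughly two seconds apart (09:42:08, :10, :12), consistent with a small fixed reconnect delay inside a bounded controller-loss window. A hedged sketch of how such behaviour is configured at attach time — the flag names follow rpc.py's bdev_nvme_attach_controller, but the exact values this run used are an assumption, not taken from this log:

    # Illustrative attach with an explicit reconnect policy (values assumed):
    # retry every 2 s, declare the controller lost after roughly the 8 s
    # window seen in the summary further below.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 8 --reconnect-delay-sec 2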
00:19:26.810 [2024-11-05 09:42:12.541401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:19:28.676 2772.83 IOPS, 10.83 MiB/s [2024-11-05T09:42:14.634Z]
00:19:28.676 2376.71 IOPS, 9.28 MiB/s [2024-11-05T09:42:14.634Z]
00:19:28.676 [2024-11-05 09:42:14.541630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:19:28.676 [2024-11-05 09:42:14.542308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state
00:19:28.677 [2024-11-05 09:42:14.542635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed
00:19:28.677 [2024-11-05 09:42:14.542733] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state
00:19:28.677 [2024-11-05 09:42:14.542828] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed.
00:19:29.611 2079.62 IOPS, 8.12 MiB/s
00:19:29.611 Latency(us)
00:19:29.611 [2024-11-05T09:42:15.569Z] Device Information : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:19:29.611 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:19:29.611 [2024-11-05T09:42:15.569Z] NVMe0n1             : 8.14       2042.95  7.98   15.72   0.00  62106.05  8162.21  7046430.72
00:19:29.611 [2024-11-05T09:42:15.569Z] ===================================================================================================================
00:19:29.611 [2024-11-05T09:42:15.569Z] Total               :            2042.95  7.98   15.72   0.00  62106.05  8162.21  7046430.72
00:19:29.611 {
00:19:29.611   "results": [
00:19:29.611     {
00:19:29.611       "job": "NVMe0n1",
00:19:29.611       "core_mask": "0x4",
00:19:29.611       "workload": "randread",
00:19:29.611       "status": "finished",
00:19:29.611       "queue_depth": 128,
00:19:29.611       "io_size": 4096,
00:19:29.611       "runtime": 8.143598,
00:19:29.611       "iops": 2042.9544778610143,
00:19:29.611       "mibps": 7.980290929144587,
00:19:29.611       "io_failed": 128,
00:19:29.611       "io_timeout": 0,
00:19:29.611       "avg_latency_us": 62106.048713174096,
00:19:29.611       "min_latency_us": 8162.210909090909,
00:19:29.611       "max_latency_us": 7046430.72
00:19:29.611     }
00:19:29.611   ],
00:19:29.611   "core_count": 1
00:19:29.611 }
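The summary above is internally consistent and easy to cross-check by hand: mibps is just iops scaled by the 4096-byte IO size, and the Fail/s column is the 128 aborted IOs spread over the 8.14 s runtime. A quick check (plain awk invoked from the shell; not part of the test itself):

    # mibps = iops * io_size / 2^20: 2042.9545 * 4096 / 1048576 = 7.9803 MiB/s
    awk 'BEGIN { printf "%.4f\n", 2042.9544778610143 * 4096 / 1048576 }'
    # Fail/s = io_failed / runtime: 128 / 8.143598 = 15.72, matching the table
    awk 'BEGIN { printf "%.2f\n", 128 / 8.143598 }'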
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:19:29.611 Attaching 5 probes...
00:19:29.611 1399.021244: reset bdev controller NVMe0
00:19:29.611 1399.094457: reconnect bdev controller NVMe0
00:19:29.611 3399.477951: reconnect delay bdev controller NVMe0
00:19:29.611 3399.533813: reconnect bdev controller NVMe0
00:19:29.611 5403.465306: reconnect delay bdev controller NVMe0
00:19:29.611 5403.484969: reconnect bdev controller NVMe0
00:19:29.611 7406.670158: reconnect delay bdev controller NVMe0
00:19:29.611 7406.688958: reconnect bdev controller NVMe0
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 82107
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
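That grep/arithmetic pair is the whole pass criterion for this test: count the "reconnect delay" probe hits and fail unless there are more than two. A hedged reconstruction of host/timeout.sh@132-137 from the xtrace above (`(( 3 <= 2 ))` in the trace is the failure branch evaluating false; the helper structure and error message here are assumptions):

    # Reconstructed from the trace above; this run saw 3 delays, so it passed.
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    if (( delays <= 2 )); then
        echo "expected more than 2 reconnect delays, got $delays" >&2
        exit 1
    fi
    kill 82107          # the bpftrace pid from this run
    rm -f "$trace"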
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 82091
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 82091 ']'
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 82091
00:19:29.611 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:19:29.869 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:29.869 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82091
00:19:29.869 killing process with pid 82091
00:19:29.869 Received shutdown signal, test time was about 8.209400 seconds
00:19:29.869
00:19:29.869 Latency(us)
00:19:29.869 [2024-11-05T09:42:15.827Z] Device Information : runtime(s)  IOPS  MiB/s  Fail/s  TO/s  Average  min   max
00:19:29.869 [2024-11-05T09:42:15.827Z] ===================================================================================================================
00:19:29.869 [2024-11-05T09:42:15.827Z] Total               :            0.00  0.00   0.00    0.00  0.00     0.00  0.00
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_2
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']'
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82091'
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 82091
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 82091
00:19:29.870 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:30.128 09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup
09:42:15 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync
09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e
09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20}
09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:19:30.128 rmmod nvme_tcp
00:19:30.128 rmmod nvme_fabrics
00:19:30.128 rmmod nvme_keyring
00:19:30.128 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:19:30.386 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e
00:19:30.386 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0
00:19:30.386 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 81673 ']'
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 81673
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@952 -- # '[' -z 81673 ']'
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # kill -0 81673
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # uname
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81673
00:19:30.387 killing process with pid 81673
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81673'
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@971 -- # kill 81673
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@976 -- # wait 81673
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster
00:19:30.387 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down
09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:30.645 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0
00:19:30.645 ************************************
00:19:30.646 END TEST nvmf_timeout
00:19:30.646 ************************************
00:19:30.646
00:19:30.646 real    0m45.496s
00:19:30.646 user    2m13.759s
00:19:30.646 sys     0m5.326s
00:19:30.646 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:30.646 09:42:16 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x
00:19:30.646 09:42:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]]
00:19:30.646 09:42:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:19:30.646 ************************************
00:19:30.646 END TEST nvmf_host
00:19:30.646 ************************************
00:19:30.646
00:19:30.646 real    5m1.873s
00:19:30.646 user    13m11.482s
00:19:30.646 sys     1m7.025s
00:19:30.646 09:42:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:30.646 09:42:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:19:30.646 09:42:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:19:30.646 09:42:16 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]]
00:19:30.905 ************************************
00:19:30.905 END TEST nvmf_tcp
00:19:30.905 ************************************
00:19:30.905
00:19:30.905 real    12m41.507s
00:19:30.905 user    30m39.314s
00:19:30.905 sys     3m6.240s
00:19:30.905 09:42:16 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:19:30.905 09:42:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:30.905 09:42:16 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]]
00:19:30.905 09:42:16 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:19:30.905 09:42:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:19:30.905 09:42:16 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:19:30.905 09:42:16 -- common/autotest_common.sh@10 -- # set +x
00:19:30.905 ************************************
00:19:30.905 START TEST nvmf_dif
00:19:30.905 ************************************
00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh
00:19:30.905 * Looking for test storage...
00:19:30.905 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.905 --rc genhtml_branch_coverage=1 00:19:30.905 --rc genhtml_function_coverage=1 00:19:30.905 --rc genhtml_legend=1 00:19:30.905 --rc geninfo_all_blocks=1 00:19:30.905 --rc geninfo_unexecuted_blocks=1 00:19:30.905 00:19:30.905 ' 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.905 --rc genhtml_branch_coverage=1 00:19:30.905 --rc genhtml_function_coverage=1 00:19:30.905 --rc genhtml_legend=1 00:19:30.905 --rc geninfo_all_blocks=1 00:19:30.905 --rc geninfo_unexecuted_blocks=1 00:19:30.905 00:19:30.905 ' 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:19:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.905 --rc genhtml_branch_coverage=1 00:19:30.905 --rc genhtml_function_coverage=1 00:19:30.905 --rc genhtml_legend=1 00:19:30.905 --rc geninfo_all_blocks=1 00:19:30.905 --rc geninfo_unexecuted_blocks=1 00:19:30.905 00:19:30.905 ' 00:19:30.905 09:42:16 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:30.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:30.905 --rc genhtml_branch_coverage=1 00:19:30.905 --rc genhtml_function_coverage=1 00:19:30.905 --rc genhtml_legend=1 00:19:30.905 --rc geninfo_all_blocks=1 00:19:30.905 --rc geninfo_unexecuted_blocks=1 00:19:30.905 00:19:30.905 ' 00:19:30.905 09:42:16 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:30.905 09:42:16 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:30.905 09:42:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.905 09:42:16 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.905 09:42:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.905 09:42:16 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:19:30.905 09:42:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:30.905 09:42:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:30.906 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:30.906 09:42:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:19:30.906 09:42:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:30.906 09:42:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:30.906 09:42:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:19:30.906 09:42:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:30.906 09:42:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:30.906 09:42:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:30.906 09:42:16 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:31.164 09:42:16 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:31.164 Cannot find device "nvmf_init_br" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@162 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:31.164 Cannot find device "nvmf_init_br2" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@163 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:31.164 Cannot find device "nvmf_tgt_br" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@164 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:31.164 Cannot find device "nvmf_tgt_br2" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@165 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:31.164 Cannot find device "nvmf_init_br" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@166 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:31.164 Cannot find device "nvmf_init_br2" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@167 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:31.164 Cannot find device "nvmf_tgt_br" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@168 -- # true 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:31.164 Cannot find device "nvmf_tgt_br2" 00:19:31.164 09:42:16 nvmf_dif -- nvmf/common.sh@169 -- # true 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:31.165 Cannot find device "nvmf_br" 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@170 -- # true 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:19:31.165 Cannot find device "nvmf_init_if" 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@171 -- # true 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:31.165 Cannot find device "nvmf_init_if2" 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@172 -- # true 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:31.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@173 -- # true 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:31.165 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@174 -- # true 00:19:31.165 09:42:16 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:31.165 09:42:17 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:31.423 09:42:17 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:31.424 09:42:17 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:31.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:31.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:19:31.424 00:19:31.424 --- 10.0.0.3 ping statistics --- 00:19:31.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.424 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:31.424 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:31.424 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:19:31.424 00:19:31.424 --- 10.0.0.4 ping statistics --- 00:19:31.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.424 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:31.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:31.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:31.424 00:19:31.424 --- 10.0.0.1 ping statistics --- 00:19:31.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.424 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:31.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:31.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:19:31.424 00:19:31.424 --- 10.0.0.2 ping statistics --- 00:19:31.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:31.424 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:19:31.424 09:42:17 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:31.683 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:31.683 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:31.683 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:31.941 09:42:17 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:31.941 09:42:17 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=82638 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:31.941 09:42:17 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 82638 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 82638 ']' 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:31.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:31.941 09:42:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:31.941 [2024-11-05 09:42:17.734730] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:19:31.941 [2024-11-05 09:42:17.734832] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.941 [2024-11-05 09:42:17.887171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.200 [2024-11-05 09:42:17.924789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
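For orientation, nvmf_veth_init (common.sh@145 onward, traced just above) builds the topology those four pings confirm: two veth pairs per side, the target ends moved into the nvmf_tgt_ns_spdk namespace, 10.0.0.1/10.0.0.2 on the initiator interfaces and 10.0.0.3/10.0.0.4 inside the namespace, the host-side peers enslaved to the nvmf_br bridge, and TCP port 4420 opened with comment-tagged iptables rules so the teardown can find them later. Reduced to a single interface pair (the *2 names are built identically), the shape is roughly:

    # One-pair sketch of the topology assembled by nvmf_veth_init above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge-side peer
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end + bridge-side peer
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_tgt_br up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # host-side peers meet on the bridge
    ip link set nvmf_tgt_br master nvmf_br
    ping -c 1 10.0.0.3                                          # initiator -> target, as checked above

The target itself is then launched inside the namespace (common.sh@508: ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF), which is why common.sh@227 prepends NVMF_TARGET_NS_CMD to NVMF_APP.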
00:19:32.200 [2024-11-05 09:42:17.924875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.200 [2024-11-05 09:42:17.924889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.200 [2024-11-05 09:42:17.924899] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.200 [2024-11-05 09:42:17.924908] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:32.200 [2024-11-05 09:42:17.925291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.200 [2024-11-05 09:42:17.957871] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:19:32.200 09:42:18 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 09:42:18 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.200 09:42:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:19:32.200 09:42:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 [2024-11-05 09:42:18.052136] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.200 09:42:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 ************************************ 00:19:32.200 START TEST fio_dif_1_default 00:19:32.200 ************************************ 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 bdev_null0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:32.200 
09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:32.200 [2024-11-05 09:42:18.096302] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.200 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:32.200 { 00:19:32.200 "params": { 00:19:32.200 "name": "Nvme$subsystem", 00:19:32.200 "trtype": "$TEST_TRANSPORT", 00:19:32.200 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:32.201 "adrfam": "ipv4", 00:19:32.201 "trsvcid": "$NVMF_PORT", 00:19:32.201 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:32.201 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:32.201 "hdgst": ${hdgst:-false}, 00:19:32.201 "ddgst": ${ddgst:-false} 00:19:32.201 }, 00:19:32.201 "method": "bdev_nvme_attach_controller" 00:19:32.201 } 00:19:32.201 EOF 00:19:32.201 )") 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:32.201 "params": { 00:19:32.201 "name": "Nvme0", 00:19:32.201 "trtype": "tcp", 00:19:32.201 "traddr": "10.0.0.3", 00:19:32.201 "adrfam": "ipv4", 00:19:32.201 "trsvcid": "4420", 00:19:32.201 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:32.201 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:32.201 "hdgst": false, 00:19:32.201 "ddgst": false 00:19:32.201 }, 00:19:32.201 "method": "bdev_nvme_attach_controller" 00:19:32.201 }' 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:32.201 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:32.459 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:32.459 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:32.460 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:32.460 09:42:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:32.460 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:32.460 fio-3.35 00:19:32.460 Starting 1 thread 00:19:44.690 00:19:44.690 filename0: (groupid=0, jobs=1): err= 0: pid=82697: Tue Nov 5 09:42:28 2024 00:19:44.690 read: IOPS=8578, BW=33.5MiB/s (35.1MB/s)(335MiB/10001msec) 00:19:44.690 slat (usec): min=6, max=209, avg= 8.93, stdev= 3.46 00:19:44.690 clat (usec): min=366, max=3360, avg=440.07, stdev=39.19 00:19:44.690 lat (usec): min=373, max=3388, avg=449.00, stdev=39.77 00:19:44.690 clat percentiles (usec): 00:19:44.690 | 
1.00th=[ 375], 5.00th=[ 388], 10.00th=[ 400], 20.00th=[ 416], 00:19:44.690 | 30.00th=[ 424], 40.00th=[ 433], 50.00th=[ 441], 60.00th=[ 449], 00:19:44.690 | 70.00th=[ 457], 80.00th=[ 465], 90.00th=[ 478], 95.00th=[ 490], 00:19:44.690 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 562], 99.95th=[ 594], 00:19:44.690 | 99.99th=[ 824] 00:19:44.690 bw ( KiB/s): min=33216, max=34688, per=100.00%, avg=34323.79, stdev=309.73, samples=19 00:19:44.690 iops : min= 8304, max= 8672, avg=8580.95, stdev=77.50, samples=19 00:19:44.690 lat (usec) : 500=97.41%, 750=2.57%, 1000=0.01% 00:19:44.690 lat (msec) : 4=0.01% 00:19:44.690 cpu : usr=85.33%, sys=12.76%, ctx=36, majf=0, minf=9 00:19:44.690 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:44.690 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.690 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:44.690 issued rwts: total=85792,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:44.690 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:44.690 00:19:44.690 Run status group 0 (all jobs): 00:19:44.690 READ: bw=33.5MiB/s (35.1MB/s), 33.5MiB/s-33.5MiB/s (35.1MB/s-35.1MB/s), io=335MiB (351MB), run=10001-10001msec 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 00:19:44.690 real 0m10.946s 00:19:44.690 user 0m9.147s 00:19:44.690 sys 0m1.523s 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 ************************************ 00:19:44.690 END TEST fio_dif_1_default 00:19:44.690 ************************************ 00:19:44.690 09:42:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:44.690 09:42:29 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:44.690 09:42:29 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 ************************************ 00:19:44.690 START TEST fio_dif_1_multi_subsystems 00:19:44.690 ************************************ 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 bdev_null0 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 [2024-11-05 09:42:29.089108] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 bdev_null1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:44.690 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.691 { 00:19:44.691 "params": { 00:19:44.691 "name": "Nvme$subsystem", 00:19:44.691 "trtype": "$TEST_TRANSPORT", 00:19:44.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.691 "adrfam": "ipv4", 00:19:44.691 "trsvcid": "$NVMF_PORT", 00:19:44.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.691 "hdgst": ${hdgst:-false}, 00:19:44.691 "ddgst": ${ddgst:-false} 00:19:44.691 }, 00:19:44.691 "method": "bdev_nvme_attach_controller" 00:19:44.691 } 00:19:44.691 EOF 00:19:44.691 )") 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 
00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:44.691 { 00:19:44.691 "params": { 00:19:44.691 "name": "Nvme$subsystem", 00:19:44.691 "trtype": "$TEST_TRANSPORT", 00:19:44.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.691 "adrfam": "ipv4", 00:19:44.691 "trsvcid": "$NVMF_PORT", 00:19:44.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.691 "hdgst": ${hdgst:-false}, 00:19:44.691 "ddgst": ${ddgst:-false} 00:19:44.691 }, 00:19:44.691 "method": "bdev_nvme_attach_controller" 00:19:44.691 } 00:19:44.691 EOF 00:19:44.691 )") 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
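The fio plumbing traced here is identical in every dif test: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem (the assembled JSON is printed just below), gen_fio_conf writes the job file, and fio_bdev preloads SPDK's bdev engine into stock fio, handing both streams in as /dev/fd process substitutions. Flattened out, the invocation amounts to:

    # What fio_bdev expands to in this run (paths as shown in the xtrace;
    # fd 62 carries the bdev JSON config, fd 61 the fio job file)
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61

The ldd/grep/awk loop around it exists only to resolve libasan/libclang_rt.asan so that a sanitized build can be preloaded ahead of the plugin; in this run both lookups come back empty and asan_lib stays unset.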
00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:44.691 "params": { 00:19:44.691 "name": "Nvme0", 00:19:44.691 "trtype": "tcp", 00:19:44.691 "traddr": "10.0.0.3", 00:19:44.691 "adrfam": "ipv4", 00:19:44.691 "trsvcid": "4420", 00:19:44.691 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:44.691 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:44.691 "hdgst": false, 00:19:44.691 "ddgst": false 00:19:44.691 }, 00:19:44.691 "method": "bdev_nvme_attach_controller" 00:19:44.691 },{ 00:19:44.691 "params": { 00:19:44.691 "name": "Nvme1", 00:19:44.691 "trtype": "tcp", 00:19:44.691 "traddr": "10.0.0.3", 00:19:44.691 "adrfam": "ipv4", 00:19:44.691 "trsvcid": "4420", 00:19:44.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.691 "hdgst": false, 00:19:44.691 "ddgst": false 00:19:44.691 }, 00:19:44.691 "method": "bdev_nvme_attach_controller" 00:19:44.691 }' 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:44.691 09:42:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:44.691 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:44.691 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:44.691 fio-3.35 00:19:44.691 Starting 2 threads 00:19:54.671 00:19:54.671 filename0: (groupid=0, jobs=1): err= 0: pid=82851: Tue Nov 5 09:42:39 2024 00:19:54.671 read: IOPS=4696, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:19:54.671 slat (usec): min=7, max=630, avg=13.69, stdev= 5.79 00:19:54.671 clat (usec): min=445, max=1676, avg=814.64, stdev=45.07 00:19:54.671 lat (usec): min=453, max=1700, avg=828.33, stdev=46.22 00:19:54.671 clat percentiles (usec): 00:19:54.671 | 1.00th=[ 709], 5.00th=[ 734], 10.00th=[ 750], 20.00th=[ 783], 00:19:54.671 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 832], 00:19:54.671 | 70.00th=[ 840], 80.00th=[ 848], 90.00th=[ 865], 95.00th=[ 881], 00:19:54.671 | 99.00th=[ 906], 99.50th=[ 922], 99.90th=[ 971], 99.95th=[ 1012], 00:19:54.671 | 99.99th=[ 1237] 00:19:54.671 bw ( KiB/s): min=18624, max=18944, per=50.02%, avg=18795.79, stdev=107.84, samples=19 00:19:54.671 iops : min= 4656, max= 4736, 
avg=4698.95, stdev=26.96, samples=19 00:19:54.671 lat (usec) : 500=0.02%, 750=9.16%, 1000=90.77% 00:19:54.671 lat (msec) : 2=0.06% 00:19:54.671 cpu : usr=89.14%, sys=9.36%, ctx=57, majf=0, minf=0 00:19:54.671 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.671 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.671 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.671 issued rwts: total=46968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.671 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:54.671 filename1: (groupid=0, jobs=1): err= 0: pid=82852: Tue Nov 5 09:42:39 2024 00:19:54.671 read: IOPS=4696, BW=18.3MiB/s (19.2MB/s)(183MiB/10001msec) 00:19:54.671 slat (usec): min=7, max=476, avg=13.79, stdev= 5.16 00:19:54.671 clat (usec): min=464, max=1335, avg=813.41, stdev=32.62 00:19:54.671 lat (usec): min=473, max=1362, avg=827.20, stdev=33.21 00:19:54.671 clat percentiles (usec): 00:19:54.671 | 1.00th=[ 742], 5.00th=[ 766], 10.00th=[ 775], 20.00th=[ 791], 00:19:54.671 | 30.00th=[ 799], 40.00th=[ 807], 50.00th=[ 816], 60.00th=[ 824], 00:19:54.671 | 70.00th=[ 832], 80.00th=[ 840], 90.00th=[ 857], 95.00th=[ 865], 00:19:54.671 | 99.00th=[ 889], 99.50th=[ 906], 99.90th=[ 947], 99.95th=[ 988], 00:19:54.671 | 99.99th=[ 1188] 00:19:54.672 bw ( KiB/s): min=18624, max=18944, per=50.03%, avg=18797.47, stdev=104.68, samples=19 00:19:54.672 iops : min= 4656, max= 4736, avg=4699.37, stdev=26.17, samples=19 00:19:54.672 lat (usec) : 500=0.02%, 750=2.01%, 1000=97.92% 00:19:54.672 lat (msec) : 2=0.04% 00:19:54.672 cpu : usr=89.83%, sys=8.68%, ctx=27, majf=0, minf=0 00:19:54.672 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:54.672 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.672 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:54.672 issued rwts: total=46972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:54.672 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:54.672 00:19:54.672 Run status group 0 (all jobs): 00:19:54.672 READ: bw=36.7MiB/s (38.5MB/s), 18.3MiB/s-18.3MiB/s (19.2MB/s-19.2MB/s), io=367MiB (385MB), run=10001-10001msec 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # 
set +x 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 00:19:54.672 real 0m11.042s 00:19:54.672 user 0m18.602s 00:19:54.672 sys 0m2.050s 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 ************************************ 00:19:54.672 END TEST fio_dif_1_multi_subsystems 00:19:54.672 ************************************ 00:19:54.672 09:42:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:54.672 09:42:40 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:19:54.672 09:42:40 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 ************************************ 00:19:54.672 START TEST fio_dif_rand_params 00:19:54.672 ************************************ 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 bdev_null0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:19:54.672 [2024-11-05 09:42:40.181080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:19:54.672 { 00:19:54.672 "params": { 00:19:54.672 "name": "Nvme$subsystem", 00:19:54.672 "trtype": "$TEST_TRANSPORT", 00:19:54.672 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:54.672 "adrfam": "ipv4", 00:19:54.672 "trsvcid": "$NVMF_PORT", 00:19:54.672 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:54.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:54.672 "hdgst": ${hdgst:-false}, 00:19:54.672 "ddgst": ${ddgst:-false} 00:19:54.672 }, 00:19:54.672 "method": "bdev_nvme_attach_controller" 00:19:54.672 } 00:19:54.672 EOF 00:19:54.672 )") 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.672 
09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
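For orientation: fio_bdev here is a wrapper that LD_PRELOADs SPDK's fio bdev plugin (the LD_PRELOAD assignment is visible later in this trace), and both inputs are handed to fio as process substitutions, so nothing touches disk: the SPDK JSON config assembled by gen_nvmf_target_json arrives on /dev/fd/62 and the generated fio job file on /dev/fd/61. A minimal standalone sketch of the same invocation, using the paths from this run, assuming the attached controller "Nvme0" exposes its namespace as bdev "Nvme0n1" (SPDK's usual <name>n<nsid> convention) and inferring --time_based from the ~5 s run times reported below:

#!/usr/bin/env bash
SPDK_DIR=/home/vagrant/spdk_repo/spdk   # build containing the fio plugin, per this log

# Bdev-layer config in SPDK's JSON config format; the params block is the one
# gen_nvmf_target_json printf's in the trace above.
cat > /tmp/bdev.json <<'JSON'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
JSON

# Same plugin and --spdk_json_conf flag as the traced command, with the job
# spelled out on the command line instead of /dev/fd/61; bs/iodepth/numjobs/
# runtime are the values dif.sh set for this NULL_DIF=3 run.
LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json \
    --thread --name=filename0 --filename=Nvme0n1 \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
    --runtime=5 --time_based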
00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:19:54.672 "params": { 00:19:54.672 "name": "Nvme0", 00:19:54.672 "trtype": "tcp", 00:19:54.672 "traddr": "10.0.0.3", 00:19:54.672 "adrfam": "ipv4", 00:19:54.672 "trsvcid": "4420", 00:19:54.672 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:54.672 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:54.672 "hdgst": false, 00:19:54.672 "ddgst": false 00:19:54.672 }, 00:19:54.672 "method": "bdev_nvme_attach_controller" 00:19:54.672 }' 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:19:54.672 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:54.673 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:19:54.673 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:19:54.673 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:19:54.673 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:19:54.673 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:54.673 09:42:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:54.673 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:54.673 ... 
00:19:54.673 fio-3.35 00:19:54.673 Starting 3 threads 00:20:01.235 00:20:01.235 filename0: (groupid=0, jobs=1): err= 0: pid=83009: Tue Nov 5 09:42:45 2024 00:20:01.235 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(159MiB/5005msec) 00:20:01.235 slat (nsec): min=7586, max=36806, avg=10113.13, stdev=3207.73 00:20:01.235 clat (usec): min=10156, max=13433, avg=11750.44, stdev=144.43 00:20:01.235 lat (usec): min=10164, max=13470, avg=11760.55, stdev=144.92 00:20:01.235 clat percentiles (usec): 00:20:01.235 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:01.235 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:20:01.235 | 70.00th=[11863], 80.00th=[11863], 90.00th=[11863], 95.00th=[11863], 00:20:01.235 | 99.00th=[12125], 99.50th=[12125], 99.90th=[13435], 99.95th=[13435], 00:20:01.235 | 99.99th=[13435] 00:20:01.235 bw ( KiB/s): min=32256, max=33024, per=33.29%, avg=32563.20, stdev=396.59, samples=10 00:20:01.235 iops : min= 252, max= 258, avg=254.40, stdev= 3.10, samples=10 00:20:01.235 lat (msec) : 20=100.00% 00:20:01.235 cpu : usr=90.75%, sys=8.71%, ctx=8, majf=0, minf=0 00:20:01.235 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.235 issued rwts: total=1275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:01.235 filename0: (groupid=0, jobs=1): err= 0: pid=83010: Tue Nov 5 09:42:45 2024 00:20:01.235 read: IOPS=255, BW=31.9MiB/s (33.4MB/s)(160MiB/5009msec) 00:20:01.235 slat (nsec): min=7730, max=57193, avg=10507.34, stdev=3886.30 00:20:01.235 clat (usec): min=4507, max=12207, avg=11732.21, stdev=361.59 00:20:01.235 lat (usec): min=4514, max=12220, avg=11742.72, stdev=361.52 00:20:01.235 clat percentiles (usec): 00:20:01.235 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:01.235 | 30.00th=[11731], 40.00th=[11731], 50.00th=[11731], 60.00th=[11731], 00:20:01.235 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11863], 00:20:01.235 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12256], 99.95th=[12256], 00:20:01.235 | 99.99th=[12256] 00:20:01.235 bw ( KiB/s): min=32256, max=33024, per=33.37%, avg=32640.00, stdev=404.77, samples=10 00:20:01.235 iops : min= 252, max= 258, avg=255.00, stdev= 3.16, samples=10 00:20:01.235 lat (msec) : 10=0.23%, 20=99.77% 00:20:01.235 cpu : usr=89.76%, sys=9.66%, ctx=9, majf=0, minf=0 00:20:01.235 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.235 issued rwts: total=1278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:01.235 filename0: (groupid=0, jobs=1): err= 0: pid=83011: Tue Nov 5 09:42:45 2024 00:20:01.235 read: IOPS=254, BW=31.8MiB/s (33.4MB/s)(159MiB/5005msec) 00:20:01.235 slat (nsec): min=7522, max=36870, avg=10057.92, stdev=3079.51 00:20:01.235 clat (usec): min=8645, max=15015, avg=11750.95, stdev=237.08 00:20:01.235 lat (usec): min=8653, max=15041, avg=11761.01, stdev=237.38 00:20:01.235 clat percentiles (usec): 00:20:01.235 | 1.00th=[11600], 5.00th=[11600], 10.00th=[11600], 20.00th=[11600], 00:20:01.235 | 30.00th=[11731], 40.00th=[11731], 
50.00th=[11731], 60.00th=[11731], 00:20:01.235 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11863], 95.00th=[11863], 00:20:01.235 | 99.00th=[11994], 99.50th=[12125], 99.90th=[15008], 99.95th=[15008], 00:20:01.235 | 99.99th=[15008] 00:20:01.235 bw ( KiB/s): min=32256, max=33024, per=33.29%, avg=32563.20, stdev=396.59, samples=10 00:20:01.235 iops : min= 252, max= 258, avg=254.40, stdev= 3.10, samples=10 00:20:01.235 lat (msec) : 10=0.24%, 20=99.76% 00:20:01.235 cpu : usr=90.11%, sys=9.35%, ctx=46, majf=0, minf=0 00:20:01.235 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:01.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.235 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:01.235 issued rwts: total=1275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:01.235 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:01.235 00:20:01.235 Run status group 0 (all jobs): 00:20:01.235 READ: bw=95.5MiB/s (100MB/s), 31.8MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=479MiB (502MB), run=5005-5009msec 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:20:01.235 
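The rpc_cmd calls in this stretch are the complete target-side recipe. Against an already-running nvmf_tgt they can be reproduced with SPDK's stock RPC client (rpc_cmd is a thin wrapper around scripts/rpc.py); the arguments below are exactly the ones traced here: a 64 MB null bdev with 512-byte blocks plus 16 bytes of per-block metadata, DIF type 2 for this run, and a TCP listener on 10.0.0.3:4420. Only the checkout path is assumed:

SPDK_DIR=/home/vagrant/spdk_repo/spdk   # assumed checkout path
RPC="$SPDK_DIR/scripts/rpc.py"          # rpc_cmd in the trace forwards to this

# Null bdev backing the namespace: name, size in MB, block size, then the
# per-block metadata size and the DIF type under test.
"$RPC" bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# NVMe-oF subsystem, namespace, and TCP listener, as in the trace.
"$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
"$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
"$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

# Teardown, mirroring the destroy_subsystems sequence logged earlier.
"$RPC" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
"$RPC" bdev_null_delete bdev_null0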
09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.235 bdev_null0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.235 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 [2024-11-05 09:42:46.125982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 bdev_null1 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 bdev_null2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.236 { 00:20:01.236 "params": { 00:20:01.236 "name": "Nvme$subsystem", 00:20:01.236 "trtype": "$TEST_TRANSPORT", 00:20:01.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.236 "adrfam": "ipv4", 00:20:01.236 "trsvcid": "$NVMF_PORT", 00:20:01.236 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.236 "hdgst": ${hdgst:-false}, 00:20:01.236 "ddgst": ${ddgst:-false} 00:20:01.236 }, 00:20:01.236 "method": "bdev_nvme_attach_controller" 00:20:01.236 } 00:20:01.236 EOF 00:20:01.236 )") 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.236 { 00:20:01.236 "params": { 00:20:01.236 "name": "Nvme$subsystem", 00:20:01.236 "trtype": "$TEST_TRANSPORT", 00:20:01.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.236 "adrfam": "ipv4", 00:20:01.236 "trsvcid": "$NVMF_PORT", 00:20:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.236 "hdgst": ${hdgst:-false}, 00:20:01.236 "ddgst": ${ddgst:-false} 00:20:01.236 }, 00:20:01.236 "method": "bdev_nvme_attach_controller" 00:20:01.236 } 00:20:01.236 EOF 00:20:01.236 )") 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 
00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:01.236 { 00:20:01.236 "params": { 00:20:01.236 "name": "Nvme$subsystem", 00:20:01.236 "trtype": "$TEST_TRANSPORT", 00:20:01.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:01.236 "adrfam": "ipv4", 00:20:01.236 "trsvcid": "$NVMF_PORT", 00:20:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:01.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:01.236 "hdgst": ${hdgst:-false}, 00:20:01.236 "ddgst": ${ddgst:-false} 00:20:01.236 }, 00:20:01.236 "method": "bdev_nvme_attach_controller" 00:20:01.236 } 00:20:01.236 EOF 00:20:01.236 )") 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:01.236 09:42:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:01.236 "params": { 00:20:01.236 "name": "Nvme0", 00:20:01.236 "trtype": "tcp", 00:20:01.236 "traddr": "10.0.0.3", 00:20:01.236 "adrfam": "ipv4", 00:20:01.236 "trsvcid": "4420", 00:20:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:01.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:01.236 "hdgst": false, 00:20:01.236 "ddgst": false 00:20:01.236 }, 00:20:01.236 "method": "bdev_nvme_attach_controller" 00:20:01.236 },{ 00:20:01.236 "params": { 00:20:01.236 "name": "Nvme1", 00:20:01.236 "trtype": "tcp", 00:20:01.236 "traddr": "10.0.0.3", 00:20:01.236 "adrfam": "ipv4", 00:20:01.236 "trsvcid": "4420", 00:20:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.236 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.236 "hdgst": false, 00:20:01.236 "ddgst": false 00:20:01.236 }, 00:20:01.236 "method": "bdev_nvme_attach_controller" 00:20:01.236 },{ 00:20:01.236 "params": { 00:20:01.236 "name": "Nvme2", 00:20:01.236 "trtype": "tcp", 00:20:01.236 "traddr": "10.0.0.3", 00:20:01.236 "adrfam": "ipv4", 00:20:01.236 "trsvcid": "4420", 00:20:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:01.236 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:01.236 "hdgst": false, 00:20:01.236 "ddgst": false 00:20:01.236 }, 00:20:01.237 "method": "bdev_nvme_attach_controller" 00:20:01.237 }' 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:01.237 09:42:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:01.237 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:01.237 ... 00:20:01.237 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:01.237 ... 00:20:01.237 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:20:01.237 ... 00:20:01.237 fio-3.35 00:20:01.237 Starting 24 threads 00:20:11.205 00:20:11.205 filename0: (groupid=0, jobs=1): err= 0: pid=83106: Tue Nov 5 09:42:57 2024 00:20:11.205 read: IOPS=220, BW=883KiB/s (905kB/s)(8884KiB/10056msec) 00:20:11.205 slat (nsec): min=4524, max=37099, avg=12887.76, stdev=4341.10 00:20:11.205 clat (msec): min=2, max=132, avg=72.26, stdev=25.53 00:20:11.205 lat (msec): min=2, max=132, avg=72.28, stdev=25.53 00:20:11.205 clat percentiles (msec): 00:20:11.205 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 47], 20.00th=[ 53], 00:20:11.205 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 80], 00:20:11.205 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 110], 00:20:11.205 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 132], 00:20:11.205 | 99.99th=[ 133] 00:20:11.205 bw ( KiB/s): min= 646, max= 1936, per=4.35%, avg=881.80, stdev=272.89, samples=20 00:20:11.205 iops : min= 161, max= 484, avg=220.40, stdev=68.26, samples=20 00:20:11.205 lat (msec) : 4=3.60%, 20=2.52%, 50=11.93%, 100=67.49%, 250=14.45% 00:20:11.205 cpu : usr=36.91%, sys=2.40%, ctx=1161, majf=0, minf=9 00:20:11.205 IO depths : 1=0.2%, 2=0.5%, 4=1.6%, 8=81.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:11.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.205 complete : 0=0.0%, 4=87.9%, 8=11.7%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.205 issued rwts: total=2221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.205 filename0: (groupid=0, jobs=1): err= 0: pid=83107: Tue Nov 5 09:42:57 2024 00:20:11.205 read: IOPS=223, BW=893KiB/s (914kB/s)(8932KiB/10005msec) 00:20:11.205 slat (usec): min=4, max=8032, avg=22.36, stdev=239.88 00:20:11.205 clat (msec): min=3, max=120, avg=71.58, stdev=22.44 00:20:11.205 lat (msec): min=3, max=120, avg=71.60, stdev=22.44 00:20:11.205 clat percentiles (msec): 00:20:11.205 | 1.00th=[ 7], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 49], 00:20:11.205 | 30.00th=[ 59], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 73], 00:20:11.205 | 70.00th=[ 84], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 109], 00:20:11.205 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 121], 99.95th=[ 121], 00:20:11.205 | 99.99th=[ 121] 00:20:11.205 bw ( KiB/s): min= 576, max= 1080, per=4.29%, avg=868.63, stdev=140.10, samples=19 00:20:11.205 iops : min= 144, max= 270, avg=217.16, stdev=35.03, samples=19 00:20:11.205 lat (msec) : 4=0.13%, 10=1.34%, 20=0.67%, 50=21.18%, 100=64.58% 00:20:11.205 lat (msec) : 250=12.09% 00:20:11.205 cpu : usr=31.39%, sys=1.84%, ctx=890, majf=0, minf=9 00:20:11.205 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.205 complete : 0=0.0%, 4=87.0%, 8=12.8%, 16=0.2%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:20:11.205 issued rwts: total=2233,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.205 filename0: (groupid=0, jobs=1): err= 0: pid=83108: Tue Nov 5 09:42:57 2024 00:20:11.205 read: IOPS=208, BW=835KiB/s (855kB/s)(8352KiB/10002msec) 00:20:11.205 slat (usec): min=3, max=8028, avg=22.09, stdev=247.97 00:20:11.205 clat (msec): min=2, max=178, avg=76.50, stdev=26.54 00:20:11.205 lat (msec): min=2, max=178, avg=76.52, stdev=26.55 00:20:11.205 clat percentiles (msec): 00:20:11.205 | 1.00th=[ 6], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 00:20:11.205 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 83], 00:20:11.205 | 70.00th=[ 95], 80.00th=[ 100], 90.00th=[ 109], 95.00th=[ 121], 00:20:11.205 | 99.00th=[ 146], 99.50th=[ 155], 99.90th=[ 157], 99.95th=[ 180], 00:20:11.205 | 99.99th=[ 180] 00:20:11.205 bw ( KiB/s): min= 512, max= 1072, per=3.97%, avg=803.79, stdev=194.91, samples=19 00:20:11.205 iops : min= 128, max= 268, avg=200.95, stdev=48.73, samples=19 00:20:11.205 lat (msec) : 4=0.29%, 10=1.44%, 20=0.57%, 50=19.64%, 100=58.91% 00:20:11.205 lat (msec) : 250=19.16% 00:20:11.205 cpu : usr=31.65%, sys=1.69%, ctx=846, majf=0, minf=9 00:20:11.205 IO depths : 1=0.1%, 2=2.0%, 4=7.8%, 8=75.4%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:11.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.205 complete : 0=0.0%, 4=89.0%, 8=9.3%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.205 issued rwts: total=2088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.205 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.205 filename0: (groupid=0, jobs=1): err= 0: pid=83109: Tue Nov 5 09:42:57 2024 00:20:11.205 read: IOPS=200, BW=800KiB/s (819kB/s)(8020KiB/10024msec) 00:20:11.205 slat (usec): min=3, max=8027, avg=27.83, stdev=261.01 00:20:11.206 clat (msec): min=31, max=154, avg=79.79, stdev=22.72 00:20:11.206 lat (msec): min=31, max=154, avg=79.82, stdev=22.72 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 57], 00:20:11.206 | 30.00th=[ 69], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 83], 00:20:11.206 | 70.00th=[ 95], 80.00th=[ 103], 90.00th=[ 110], 95.00th=[ 115], 00:20:11.206 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 155], 00:20:11.206 | 99.99th=[ 155] 00:20:11.206 bw ( KiB/s): min= 528, max= 1000, per=3.94%, avg=797.80, stdev=166.44, samples=20 00:20:11.206 iops : min= 132, max= 250, avg=199.45, stdev=41.61, samples=20 00:20:11.206 lat (msec) : 50=10.62%, 100=66.98%, 250=22.39% 00:20:11.206 cpu : usr=41.27%, sys=2.43%, ctx=1346, majf=0, minf=9 00:20:11.206 IO depths : 1=0.1%, 2=2.6%, 4=10.7%, 8=71.9%, 16=14.7%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=90.2%, 8=7.4%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename0: (groupid=0, jobs=1): err= 0: pid=83110: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=210, BW=841KiB/s (861kB/s)(8416KiB/10009msec) 00:20:11.206 slat (usec): min=4, max=8032, avg=22.46, stdev=247.06 00:20:11.206 clat (msec): min=13, max=157, avg=75.98, stdev=24.60 00:20:11.206 lat (msec): min=13, max=157, avg=76.00, stdev=24.60 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 50], 00:20:11.206 | 
30.00th=[ 59], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 78], 00:20:11.206 | 70.00th=[ 85], 80.00th=[ 99], 90.00th=[ 109], 95.00th=[ 120], 00:20:11.206 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 159], 00:20:11.206 | 99.99th=[ 159] 00:20:11.206 bw ( KiB/s): min= 528, max= 1024, per=4.09%, avg=828.21, stdev=190.99, samples=19 00:20:11.206 iops : min= 132, max= 256, avg=207.05, stdev=47.75, samples=19 00:20:11.206 lat (msec) : 20=0.43%, 50=22.20%, 100=58.46%, 250=18.92% 00:20:11.206 cpu : usr=33.38%, sys=1.87%, ctx=978, majf=0, minf=9 00:20:11.206 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=77.2%, 16=14.9%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=88.5%, 8=10.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename0: (groupid=0, jobs=1): err= 0: pid=83111: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=219, BW=878KiB/s (899kB/s)(8784KiB/10004msec) 00:20:11.206 slat (nsec): min=4485, max=39772, avg=15239.51, stdev=5296.28 00:20:11.206 clat (msec): min=5, max=156, avg=72.81, stdev=22.50 00:20:11.206 lat (msec): min=5, max=156, avg=72.83, stdev=22.50 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 13], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 51], 00:20:11.206 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:20:11.206 | 70.00th=[ 82], 80.00th=[ 95], 90.00th=[ 108], 95.00th=[ 109], 00:20:11.206 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 144], 99.95th=[ 157], 00:20:11.206 | 99.99th=[ 157] 00:20:11.206 bw ( KiB/s): min= 507, max= 1048, per=4.24%, avg=858.68, stdev=156.62, samples=19 00:20:11.206 iops : min= 126, max= 262, avg=214.63, stdev=39.25, samples=19 00:20:11.206 lat (msec) : 10=0.55%, 20=0.59%, 50=18.62%, 100=65.94%, 250=14.30% 00:20:11.206 cpu : usr=36.75%, sys=1.95%, ctx=1095, majf=0, minf=9 00:20:11.206 IO depths : 1=0.1%, 2=0.4%, 4=1.4%, 8=82.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=87.0%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename0: (groupid=0, jobs=1): err= 0: pid=83112: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=222, BW=888KiB/s (910kB/s)(8884KiB/10001msec) 00:20:11.206 slat (usec): min=4, max=8030, avg=22.22, stdev=208.36 00:20:11.206 clat (usec): min=1184, max=137044, avg=71946.06, stdev=22885.92 00:20:11.206 lat (usec): min=1193, max=137053, avg=71968.28, stdev=22883.71 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 5], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 50], 00:20:11.206 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:20:11.206 | 70.00th=[ 83], 80.00th=[ 90], 90.00th=[ 106], 95.00th=[ 110], 00:20:11.206 | 99.00th=[ 121], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 138], 00:20:11.206 | 99.99th=[ 138] 00:20:11.206 bw ( KiB/s): min= 592, max= 1024, per=4.24%, avg=858.53, stdev=140.07, samples=19 00:20:11.206 iops : min= 148, max= 256, avg=214.63, stdev=35.02, samples=19 00:20:11.206 lat (msec) : 2=0.59%, 4=0.27%, 10=1.13%, 20=0.72%, 50=18.28% 00:20:11.206 lat (msec) : 100=66.01%, 250=13.01% 00:20:11.206 cpu : usr=36.31%, sys=1.96%, ctx=1144, majf=0, minf=9 00:20:11.206 IO depths : 
1=0.1%, 2=0.5%, 4=1.8%, 8=82.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2221,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename0: (groupid=0, jobs=1): err= 0: pid=83113: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=213, BW=853KiB/s (874kB/s)(8544KiB/10011msec) 00:20:11.206 slat (usec): min=4, max=8036, avg=24.11, stdev=260.11 00:20:11.206 clat (msec): min=25, max=126, avg=74.90, stdev=19.69 00:20:11.206 lat (msec): min=25, max=126, avg=74.92, stdev=19.70 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 57], 00:20:11.206 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 79], 00:20:11.206 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 106], 95.00th=[ 110], 00:20:11.206 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 126], 99.95th=[ 126], 00:20:11.206 | 99.99th=[ 127] 00:20:11.206 bw ( KiB/s): min= 688, max= 976, per=4.17%, avg=844.53, stdev=106.67, samples=19 00:20:11.206 iops : min= 172, max= 244, avg=211.11, stdev=26.64, samples=19 00:20:11.206 lat (msec) : 50=13.90%, 100=72.38%, 250=13.72% 00:20:11.206 cpu : usr=35.97%, sys=2.22%, ctx=1230, majf=0, minf=9 00:20:11.206 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2136,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename1: (groupid=0, jobs=1): err= 0: pid=83114: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=208, BW=836KiB/s (856kB/s)(8396KiB/10045msec) 00:20:11.206 slat (usec): min=3, max=8031, avg=25.50, stdev=302.80 00:20:11.206 clat (msec): min=11, max=134, avg=76.41, stdev=22.40 00:20:11.206 lat (msec): min=11, max=134, avg=76.44, stdev=22.40 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 13], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 59], 00:20:11.206 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:20:11.206 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:20:11.206 | 99.00th=[ 124], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 133], 00:20:11.206 | 99.99th=[ 136] 00:20:11.206 bw ( KiB/s): min= 606, max= 1026, per=4.12%, avg=834.30, stdev=127.31, samples=20 00:20:11.206 iops : min= 151, max= 256, avg=208.50, stdev=31.87, samples=20 00:20:11.206 lat (msec) : 20=2.29%, 50=12.63%, 100=68.75%, 250=16.34% 00:20:11.206 cpu : usr=31.52%, sys=1.82%, ctx=900, majf=0, minf=9 00:20:11.206 IO depths : 1=0.1%, 2=0.7%, 4=2.6%, 8=80.3%, 16=16.4%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=88.3%, 8=11.1%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2099,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename1: (groupid=0, jobs=1): err= 0: pid=83115: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=205, BW=822KiB/s (842kB/s)(8224KiB/10007msec) 00:20:11.206 slat (usec): min=3, max=9022, avg=27.38, stdev=293.72 00:20:11.206 clat (msec): min=12, max=165, avg=77.73, stdev=24.35 00:20:11.206 lat 
(msec): min=12, max=165, avg=77.76, stdev=24.36 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 33], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 53], 00:20:11.206 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:20:11.206 | 70.00th=[ 92], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 120], 00:20:11.206 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 157], 99.95th=[ 167], 00:20:11.206 | 99.99th=[ 167] 00:20:11.206 bw ( KiB/s): min= 512, max= 1024, per=3.97%, avg=803.37, stdev=187.57, samples=19 00:20:11.206 iops : min= 128, max= 256, avg=200.84, stdev=46.89, samples=19 00:20:11.206 lat (msec) : 20=0.63%, 50=14.11%, 100=63.23%, 250=22.03% 00:20:11.206 cpu : usr=42.50%, sys=2.08%, ctx=1240, majf=0, minf=9 00:20:11.206 IO depths : 1=0.1%, 2=2.3%, 4=9.1%, 8=74.0%, 16=14.6%, 32=0.0%, >=64=0.0% 00:20:11.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.206 issued rwts: total=2056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.206 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.206 filename1: (groupid=0, jobs=1): err= 0: pid=83116: Tue Nov 5 09:42:57 2024 00:20:11.206 read: IOPS=202, BW=810KiB/s (829kB/s)(8108KiB/10010msec) 00:20:11.206 slat (usec): min=4, max=8026, avg=26.89, stdev=308.05 00:20:11.206 clat (msec): min=9, max=153, avg=78.86, stdev=22.95 00:20:11.206 lat (msec): min=9, max=153, avg=78.89, stdev=22.95 00:20:11.206 clat percentiles (msec): 00:20:11.206 | 1.00th=[ 21], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 59], 00:20:11.206 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 83], 00:20:11.206 | 70.00th=[ 95], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 111], 00:20:11.206 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 146], 99.95th=[ 155], 00:20:11.206 | 99.99th=[ 155] 00:20:11.206 bw ( KiB/s): min= 528, max= 1024, per=3.90%, avg=789.11, stdev=167.79, samples=19 00:20:11.206 iops : min= 132, max= 256, avg=197.26, stdev=41.96, samples=19 00:20:11.207 lat (msec) : 10=0.15%, 20=0.79%, 50=13.67%, 100=63.39%, 250=22.00% 00:20:11.207 cpu : usr=32.57%, sys=1.73%, ctx=914, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=2.8%, 4=11.0%, 8=71.7%, 16=14.5%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=90.1%, 8=7.5%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename1: (groupid=0, jobs=1): err= 0: pid=83117: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=202, BW=811KiB/s (830kB/s)(8128KiB/10026msec) 00:20:11.207 slat (usec): min=4, max=8027, avg=22.81, stdev=220.29 00:20:11.207 clat (msec): min=21, max=155, avg=78.76, stdev=24.55 00:20:11.207 lat (msec): min=21, max=155, avg=78.79, stdev=24.55 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 24], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 54], 00:20:11.207 | 30.00th=[ 66], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 82], 00:20:11.207 | 70.00th=[ 94], 80.00th=[ 105], 90.00th=[ 110], 95.00th=[ 120], 00:20:11.207 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 153], 99.95th=[ 157], 00:20:11.207 | 99.99th=[ 157] 00:20:11.207 bw ( KiB/s): min= 512, max= 1024, per=3.99%, avg=808.40, stdev=176.91, samples=20 00:20:11.207 iops : min= 128, max= 256, avg=202.10, stdev=44.23, samples=20 00:20:11.207 lat (msec) : 50=13.58%, 100=62.94%, 250=23.47% 00:20:11.207 
cpu : usr=42.08%, sys=2.47%, ctx=1308, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=2.4%, 4=9.6%, 8=73.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=89.8%, 8=8.1%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename1: (groupid=0, jobs=1): err= 0: pid=83118: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=214, BW=858KiB/s (879kB/s)(8636KiB/10064msec) 00:20:11.207 slat (usec): min=3, max=8025, avg=31.00, stdev=347.07 00:20:11.207 clat (msec): min=2, max=137, avg=74.25, stdev=26.21 00:20:11.207 lat (msec): min=2, max=137, avg=74.28, stdev=26.21 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 3], 5.00th=[ 12], 10.00th=[ 47], 20.00th=[ 56], 00:20:11.207 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 80], 00:20:11.207 | 70.00th=[ 85], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 113], 00:20:11.207 | 99.00th=[ 130], 99.50th=[ 132], 99.90th=[ 134], 99.95th=[ 138], 00:20:11.207 | 99.99th=[ 138] 00:20:11.207 bw ( KiB/s): min= 526, max= 1664, per=4.24%, avg=859.40, stdev=228.41, samples=20 00:20:11.207 iops : min= 131, max= 416, avg=214.80, stdev=57.17, samples=20 00:20:11.207 lat (msec) : 4=2.22%, 10=1.48%, 20=2.22%, 50=10.00%, 100=66.28% 00:20:11.207 lat (msec) : 250=17.79% 00:20:11.207 cpu : usr=39.39%, sys=2.23%, ctx=1254, majf=0, minf=9 00:20:11.207 IO depths : 1=0.3%, 2=1.3%, 4=4.4%, 8=78.1%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=88.7%, 8=10.4%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2159,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename1: (groupid=0, jobs=1): err= 0: pid=83119: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=206, BW=826KiB/s (845kB/s)(8268KiB/10014msec) 00:20:11.207 slat (usec): min=4, max=6853, avg=19.22, stdev=160.95 00:20:11.207 clat (msec): min=21, max=168, avg=77.39, stdev=23.73 00:20:11.207 lat (msec): min=21, max=168, avg=77.41, stdev=23.73 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 53], 00:20:11.207 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 80], 00:20:11.207 | 70.00th=[ 90], 80.00th=[ 105], 90.00th=[ 111], 95.00th=[ 117], 00:20:11.207 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 146], 99.95th=[ 169], 00:20:11.207 | 99.99th=[ 169] 00:20:11.207 bw ( KiB/s): min= 512, max= 1024, per=4.03%, avg=816.84, stdev=186.57, samples=19 00:20:11.207 iops : min= 128, max= 256, avg=204.21, stdev=46.64, samples=19 00:20:11.207 lat (msec) : 50=14.32%, 100=64.39%, 250=21.29% 00:20:11.207 cpu : usr=40.97%, sys=2.25%, ctx=1561, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=1.7%, 4=6.5%, 8=76.6%, 16=15.0%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=88.8%, 8=9.8%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename1: (groupid=0, jobs=1): err= 0: pid=83120: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=212, BW=849KiB/s (869kB/s)(8496KiB/10011msec) 00:20:11.207 slat (usec): 
min=4, max=4024, avg=17.04, stdev=88.66 00:20:11.207 clat (msec): min=10, max=144, avg=75.32, stdev=21.91 00:20:11.207 lat (msec): min=10, max=144, avg=75.34, stdev=21.91 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 25], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:11.207 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 73], 60.00th=[ 80], 00:20:11.207 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 109], 00:20:11.207 | 99.00th=[ 122], 99.50th=[ 132], 99.90th=[ 132], 99.95th=[ 132], 00:20:11.207 | 99.99th=[ 144] 00:20:11.207 bw ( KiB/s): min= 576, max= 1024, per=4.10%, avg=831.16, stdev=142.18, samples=19 00:20:11.207 iops : min= 144, max= 256, avg=207.79, stdev=35.55, samples=19 00:20:11.207 lat (msec) : 20=0.75%, 50=14.03%, 100=69.40%, 250=15.82% 00:20:11.207 cpu : usr=39.13%, sys=1.87%, ctx=1428, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.5%, 16=16.3%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename1: (groupid=0, jobs=1): err= 0: pid=83121: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=206, BW=825KiB/s (844kB/s)(8256KiB/10011msec) 00:20:11.207 slat (usec): min=4, max=8028, avg=22.69, stdev=249.39 00:20:11.207 clat (msec): min=24, max=156, avg=77.44, stdev=25.70 00:20:11.207 lat (msec): min=24, max=156, avg=77.46, stdev=25.71 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 51], 00:20:11.207 | 30.00th=[ 61], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 82], 00:20:11.207 | 70.00th=[ 85], 80.00th=[ 107], 90.00th=[ 109], 95.00th=[ 132], 00:20:11.207 | 99.00th=[ 144], 99.50th=[ 146], 99.90th=[ 157], 99.95th=[ 157], 00:20:11.207 | 99.99th=[ 157] 00:20:11.207 bw ( KiB/s): min= 512, max= 1024, per=4.02%, avg=813.84, stdev=188.85, samples=19 00:20:11.207 iops : min= 128, max= 256, avg=203.42, stdev=47.19, samples=19 00:20:11.207 lat (msec) : 50=20.40%, 100=56.73%, 250=22.87% 00:20:11.207 cpu : usr=31.52%, sys=1.64%, ctx=885, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=76.9%, 16=15.1%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=88.7%, 8=9.9%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename2: (groupid=0, jobs=1): err= 0: pid=83122: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=207, BW=828KiB/s (848kB/s)(8316KiB/10039msec) 00:20:11.207 slat (usec): min=5, max=8029, avg=25.37, stdev=304.27 00:20:11.207 clat (msec): min=14, max=156, avg=77.05, stdev=22.91 00:20:11.207 lat (msec): min=14, max=156, avg=77.07, stdev=22.92 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 16], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 61], 00:20:11.207 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 82], 00:20:11.207 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:20:11.207 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 157], 99.95th=[ 157], 00:20:11.207 | 99.99th=[ 157] 00:20:11.207 bw ( KiB/s): min= 542, max= 1136, per=4.08%, avg=827.40, stdev=138.08, samples=20 00:20:11.207 iops : min= 135, max= 284, 
avg=206.80, stdev=34.59, samples=20 00:20:11.207 lat (msec) : 20=2.21%, 50=10.73%, 100=70.47%, 250=16.59% 00:20:11.207 cpu : usr=31.66%, sys=1.56%, ctx=851, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=0.6%, 4=2.3%, 8=80.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=88.3%, 8=11.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=2079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename2: (groupid=0, jobs=1): err= 0: pid=83123: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=193, BW=772KiB/s (791kB/s)(7748KiB/10034msec) 00:20:11.207 slat (usec): min=3, max=8025, avg=24.76, stdev=272.92 00:20:11.207 clat (msec): min=12, max=154, avg=82.67, stdev=23.17 00:20:11.207 lat (msec): min=12, max=155, avg=82.69, stdev=23.17 00:20:11.207 clat percentiles (msec): 00:20:11.207 | 1.00th=[ 22], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 67], 00:20:11.207 | 30.00th=[ 72], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 85], 00:20:11.207 | 70.00th=[ 97], 80.00th=[ 106], 90.00th=[ 111], 95.00th=[ 118], 00:20:11.207 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 155], 99.95th=[ 155], 00:20:11.207 | 99.99th=[ 155] 00:20:11.207 bw ( KiB/s): min= 526, max= 1008, per=3.80%, avg=770.60, stdev=156.23, samples=20 00:20:11.207 iops : min= 131, max= 252, avg=192.60, stdev=39.12, samples=20 00:20:11.207 lat (msec) : 20=0.83%, 50=8.57%, 100=63.76%, 250=26.85% 00:20:11.207 cpu : usr=43.09%, sys=2.49%, ctx=1229, majf=0, minf=9 00:20:11.207 IO depths : 1=0.1%, 2=4.1%, 4=16.6%, 8=65.3%, 16=13.9%, 32=0.0%, >=64=0.0% 00:20:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 complete : 0=0.0%, 4=92.0%, 8=4.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.207 issued rwts: total=1937,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.207 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.207 filename2: (groupid=0, jobs=1): err= 0: pid=83124: Tue Nov 5 09:42:57 2024 00:20:11.207 read: IOPS=219, BW=877KiB/s (899kB/s)(8796KiB/10024msec) 00:20:11.207 slat (usec): min=5, max=8041, avg=26.51, stdev=270.27 00:20:11.208 clat (msec): min=21, max=153, avg=72.72, stdev=21.55 00:20:11.208 lat (msec): min=21, max=153, avg=72.75, stdev=21.56 00:20:11.208 clat percentiles (msec): 00:20:11.208 | 1.00th=[ 27], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 51], 00:20:11.208 | 30.00th=[ 58], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:20:11.208 | 70.00th=[ 81], 80.00th=[ 92], 90.00th=[ 107], 95.00th=[ 110], 00:20:11.208 | 99.00th=[ 121], 99.50th=[ 122], 99.90th=[ 138], 99.95th=[ 155], 00:20:11.208 | 99.99th=[ 155] 00:20:11.208 bw ( KiB/s): min= 600, max= 1024, per=4.32%, avg=876.00, stdev=128.85, samples=20 00:20:11.208 iops : min= 150, max= 256, avg=219.00, stdev=32.21, samples=20 00:20:11.208 lat (msec) : 50=19.33%, 100=65.53%, 250=15.14% 00:20:11.208 cpu : usr=43.04%, sys=2.40%, ctx=1267, majf=0, minf=9 00:20:11.208 IO depths : 1=0.1%, 2=0.5%, 4=2.0%, 8=81.9%, 16=15.6%, 32=0.0%, >=64=0.0% 00:20:11.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 complete : 0=0.0%, 4=87.3%, 8=12.3%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.208 filename2: (groupid=0, jobs=1): err= 0: pid=83125: Tue Nov 5 09:42:57 2024 00:20:11.208 read: 
IOPS=211, BW=848KiB/s (868kB/s)(8492KiB/10019msec) 00:20:11.208 slat (usec): min=5, max=8032, avg=28.86, stdev=332.44 00:20:11.208 clat (msec): min=26, max=156, avg=75.36, stdev=21.64 00:20:11.208 lat (msec): min=26, max=156, avg=75.39, stdev=21.64 00:20:11.208 clat percentiles (msec): 00:20:11.208 | 1.00th=[ 32], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 57], 00:20:11.208 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 80], 00:20:11.208 | 70.00th=[ 84], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:20:11.208 | 99.00th=[ 131], 99.50th=[ 132], 99.90th=[ 144], 99.95th=[ 157], 00:20:11.208 | 99.99th=[ 157] 00:20:11.208 bw ( KiB/s): min= 576, max= 976, per=4.16%, avg=842.80, stdev=118.55, samples=20 00:20:11.208 iops : min= 144, max= 244, avg=210.70, stdev=29.64, samples=20 00:20:11.208 lat (msec) : 50=16.86%, 100=67.59%, 250=15.54% 00:20:11.208 cpu : usr=32.33%, sys=1.86%, ctx=923, majf=0, minf=9 00:20:11.208 IO depths : 1=0.1%, 2=0.4%, 4=1.6%, 8=81.8%, 16=16.1%, 32=0.0%, >=64=0.0% 00:20:11.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 complete : 0=0.0%, 4=87.6%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 issued rwts: total=2123,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.208 filename2: (groupid=0, jobs=1): err= 0: pid=83126: Tue Nov 5 09:42:57 2024 00:20:11.208 read: IOPS=219, BW=879KiB/s (900kB/s)(8824KiB/10039msec) 00:20:11.208 slat (usec): min=3, max=3050, avg=16.91, stdev=91.28 00:20:11.208 clat (msec): min=12, max=128, avg=72.62, stdev=21.69 00:20:11.208 lat (msec): min=12, max=128, avg=72.64, stdev=21.70 00:20:11.208 clat percentiles (msec): 00:20:11.208 | 1.00th=[ 14], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 53], 00:20:11.208 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 78], 00:20:11.208 | 70.00th=[ 82], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 109], 00:20:11.208 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 127], 00:20:11.208 | 99.99th=[ 129] 00:20:11.208 bw ( KiB/s): min= 686, max= 1216, per=4.34%, avg=878.60, stdev=142.41, samples=20 00:20:11.208 iops : min= 171, max= 304, avg=219.60, stdev=35.67, samples=20 00:20:11.208 lat (msec) : 20=2.18%, 50=14.14%, 100=69.99%, 250=13.69% 00:20:11.208 cpu : usr=38.94%, sys=2.25%, ctx=1605, majf=0, minf=9 00:20:11.208 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:20:11.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 complete : 0=0.0%, 4=87.4%, 8=12.3%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 issued rwts: total=2206,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.208 filename2: (groupid=0, jobs=1): err= 0: pid=83127: Tue Nov 5 09:42:57 2024 00:20:11.208 read: IOPS=220, BW=881KiB/s (902kB/s)(8836KiB/10030msec) 00:20:11.208 slat (usec): min=3, max=8030, avg=32.49, stdev=305.90 00:20:11.208 clat (msec): min=24, max=119, avg=72.45, stdev=19.82 00:20:11.208 lat (msec): min=24, max=119, avg=72.48, stdev=19.83 00:20:11.208 clat percentiles (msec): 00:20:11.208 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:20:11.208 | 30.00th=[ 59], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 77], 00:20:11.208 | 70.00th=[ 81], 80.00th=[ 89], 90.00th=[ 104], 95.00th=[ 110], 00:20:11.208 | 99.00th=[ 116], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:20:11.208 | 99.99th=[ 121] 00:20:11.208 bw ( KiB/s): min= 664, max= 1024, per=4.33%, 
avg=876.90, stdev=111.50, samples=20 00:20:11.208 iops : min= 166, max= 256, avg=219.15, stdev=27.94, samples=20 00:20:11.208 lat (msec) : 50=16.48%, 100=72.20%, 250=11.32% 00:20:11.208 cpu : usr=41.60%, sys=2.57%, ctx=1289, majf=0, minf=9 00:20:11.208 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:20:11.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 issued rwts: total=2209,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.208 filename2: (groupid=0, jobs=1): err= 0: pid=83128: Tue Nov 5 09:42:57 2024 00:20:11.208 read: IOPS=217, BW=870KiB/s (891kB/s)(8736KiB/10039msec) 00:20:11.208 slat (usec): min=4, max=8035, avg=27.12, stdev=284.34 00:20:11.208 clat (msec): min=11, max=128, avg=73.32, stdev=21.45 00:20:11.208 lat (msec): min=11, max=128, avg=73.35, stdev=21.45 00:20:11.208 clat percentiles (msec): 00:20:11.208 | 1.00th=[ 15], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 54], 00:20:11.208 | 30.00th=[ 64], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 79], 00:20:11.208 | 70.00th=[ 82], 80.00th=[ 92], 90.00th=[ 106], 95.00th=[ 110], 00:20:11.208 | 99.00th=[ 117], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:20:11.208 | 99.99th=[ 129] 00:20:11.208 bw ( KiB/s): min= 688, max= 1192, per=4.29%, avg=869.80, stdev=131.37, samples=20 00:20:11.208 iops : min= 172, max= 298, avg=217.40, stdev=32.91, samples=20 00:20:11.208 lat (msec) : 20=2.11%, 50=13.19%, 100=70.38%, 250=14.33% 00:20:11.208 cpu : usr=42.76%, sys=2.86%, ctx=1232, majf=0, minf=9 00:20:11.208 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=82.3%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:11.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 complete : 0=0.0%, 4=87.5%, 8=12.3%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 issued rwts: total=2184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.208 filename2: (groupid=0, jobs=1): err= 0: pid=83129: Tue Nov 5 09:42:57 2024 00:20:11.208 read: IOPS=216, BW=865KiB/s (885kB/s)(8676KiB/10035msec) 00:20:11.208 slat (usec): min=3, max=8027, avg=19.99, stdev=192.44 00:20:11.208 clat (msec): min=21, max=131, avg=73.89, stdev=20.55 00:20:11.208 lat (msec): min=21, max=131, avg=73.91, stdev=20.55 00:20:11.208 clat percentiles (msec): 00:20:11.208 | 1.00th=[ 23], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 55], 00:20:11.208 | 30.00th=[ 62], 40.00th=[ 72], 50.00th=[ 72], 60.00th=[ 75], 00:20:11.208 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 107], 95.00th=[ 108], 00:20:11.208 | 99.00th=[ 120], 99.50th=[ 121], 99.90th=[ 123], 99.95th=[ 130], 00:20:11.208 | 99.99th=[ 132] 00:20:11.208 bw ( KiB/s): min= 686, max= 1120, per=4.26%, avg=862.20, stdev=121.55, samples=20 00:20:11.208 iops : min= 171, max= 280, avg=215.50, stdev=30.45, samples=20 00:20:11.208 lat (msec) : 50=16.97%, 100=70.59%, 250=12.45% 00:20:11.208 cpu : usr=31.33%, sys=1.91%, ctx=855, majf=0, minf=9 00:20:11.208 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.8%, 16=16.2%, 32=0.0%, >=64=0.0% 00:20:11.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 complete : 0=0.0%, 4=87.4%, 8=12.5%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:11.208 issued rwts: total=2169,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:11.208 latency : target=0, window=0, percentile=100.00%, depth=16 00:20:11.208 00:20:11.208 Run status 
group 0 (all jobs): 00:20:11.208 READ: bw=19.8MiB/s (20.7MB/s), 772KiB/s-893KiB/s (791kB/s-914kB/s), io=199MiB (209MB), run=10001-10064msec 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 bdev_null0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.467 [2024-11-05 09:42:57.416677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.467 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.726 bdev_null1 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.726 { 00:20:11.726 "params": { 00:20:11.726 "name": "Nvme$subsystem", 00:20:11.726 "trtype": "$TEST_TRANSPORT", 00:20:11.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.726 "adrfam": "ipv4", 00:20:11.726 "trsvcid": "$NVMF_PORT", 00:20:11.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.726 "hdgst": ${hdgst:-false}, 00:20:11.726 "ddgst": ${ddgst:-false} 00:20:11.726 }, 00:20:11.726 "method": "bdev_nvme_attach_controller" 00:20:11.726 } 00:20:11.726 EOF 00:20:11.726 )") 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:11.726 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 
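The heredoc visible in the trace above is how gen_nvmf_target_json builds the JSON handed to fio's bdev plugin: one bdev_nvme_attach_controller block per subsystem ID, collected into an array and later comma-joined onto the file descriptor fio reads. A minimal standalone sketch of that assembly, using only what the trace shows (nvmf/common.sh@560-586); anything beyond it, such as the jq validation of the finished document, is an assumption:

#!/usr/bin/env bash
# Sketch of the per-subsystem JSON assembly from nvmf/common.sh@560-586.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_PORT=4420
config=()
for subsystem in "${@:-1}"; do
    config+=("$(cat <<-EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done
# Comma-join the blocks, exactly as the IFS=, / printf pair in the trace does.
IFS=,
printf '%s\n' "${config[*]}"

Run as "bash sketch.sh 0 1" this prints the same two comma-joined attach-controller objects the harness later emits for Nvme0 and Nvme1.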
00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:11.727 { 00:20:11.727 "params": { 00:20:11.727 "name": "Nvme$subsystem", 00:20:11.727 "trtype": "$TEST_TRANSPORT", 00:20:11.727 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:11.727 "adrfam": "ipv4", 00:20:11.727 "trsvcid": "$NVMF_PORT", 00:20:11.727 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:11.727 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:11.727 "hdgst": ${hdgst:-false}, 00:20:11.727 "ddgst": ${ddgst:-false} 00:20:11.727 }, 00:20:11.727 "method": "bdev_nvme_attach_controller" 00:20:11.727 } 00:20:11.727 EOF 00:20:11.727 )") 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
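Right above, autotest_common.sh probes the fio plugin for sanitizer dependencies before launching fio: ldd output for build/fio/spdk_bdev is grepped for libasan and libclang_rt.asan, and any hit is prepended to LD_PRELOAD so the sanitizer runtime loads before the plugin (both probes come back empty in this run, which is why LD_PRELOAD ends up holding only the plugin path). A condensed sketch of that probe; the loop shape follows the trace at autotest_common.sh@1341-1354, the rest is an assumption:

# Probe the fio plugin for ASAN-family dependencies.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
sanitizers=('libasan' 'libclang_rt.asan')
LD_PRELOAD=
for sanitizer in "${sanitizers[@]}"; do
    # Column 3 of an ldd line is the resolved library path; empty if not linked.
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n $asan_lib ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
done
# The plugin itself always rides along, sanitizer runtime (if any) first.
LD_PRELOAD="$LD_PRELOAD $plugin"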
00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:11.727 "params": { 00:20:11.727 "name": "Nvme0", 00:20:11.727 "trtype": "tcp", 00:20:11.727 "traddr": "10.0.0.3", 00:20:11.727 "adrfam": "ipv4", 00:20:11.727 "trsvcid": "4420", 00:20:11.727 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:11.727 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:11.727 "hdgst": false, 00:20:11.727 "ddgst": false 00:20:11.727 }, 00:20:11.727 "method": "bdev_nvme_attach_controller" 00:20:11.727 },{ 00:20:11.727 "params": { 00:20:11.727 "name": "Nvme1", 00:20:11.727 "trtype": "tcp", 00:20:11.727 "traddr": "10.0.0.3", 00:20:11.727 "adrfam": "ipv4", 00:20:11.727 "trsvcid": "4420", 00:20:11.727 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:11.727 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:11.727 "hdgst": false, 00:20:11.727 "ddgst": false 00:20:11.727 }, 00:20:11.727 "method": "bdev_nvme_attach_controller" 00:20:11.727 }' 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:11.727 09:42:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:11.727 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:11.727 ... 00:20:11.727 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:20:11.727 ... 
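The job lines fio echoes here pin down most of what gen_fio_conf must have written to /dev/fd/61: randread over the spdk_bdev engine at iodepth 8, with the comma-separated bs=8k,16k,128k from dif.sh@115 mapping to read/write/trim sizes, which is exactly why fio prints (R) 8192B, (W) 16.0KiB, (T) 128KiB. The log never shows the job file itself, so the following is a hypothetical reconstruction from those parameters and the two job names; the Nvme0n1/Nvme1n1 bdev names are likewise assumptions (attach_controller conventionally exposes namespaces as <name>n<nsid>):

# Hypothetical job file matching the echoed job lines; not taken from the log.
cat > dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k
iodepth=8
numjobs=2
runtime=5
time_based=1

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF

Two job sections times numjobs=2 also accounts for the "Starting 4 threads" line that follows.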
00:20:11.727 fio-3.35 00:20:11.727 Starting 4 threads 00:20:18.336 00:20:18.336 filename0: (groupid=0, jobs=1): err= 0: pid=83275: Tue Nov 5 09:43:03 2024 00:20:18.336 read: IOPS=1988, BW=15.5MiB/s (16.3MB/s)(77.7MiB/5003msec) 00:20:18.336 slat (nsec): min=3760, max=41971, avg=14684.97, stdev=2943.97 00:20:18.336 clat (usec): min=1303, max=9887, avg=3972.90, stdev=732.10 00:20:18.336 lat (usec): min=1317, max=9902, avg=3987.58, stdev=732.00 00:20:18.337 clat percentiles (usec): 00:20:18.337 | 1.00th=[ 1565], 5.00th=[ 2638], 10.00th=[ 3326], 20.00th=[ 3425], 00:20:18.337 | 30.00th=[ 3884], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 3982], 00:20:18.337 | 70.00th=[ 4178], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5014], 00:20:18.337 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 7177], 99.95th=[ 7832], 00:20:18.337 | 99.99th=[ 9896] 00:20:18.337 bw ( KiB/s): min=14080, max=17376, per=24.28%, avg=15701.33, stdev=1026.59, samples=9 00:20:18.337 iops : min= 1760, max= 2172, avg=1962.67, stdev=128.32, samples=9 00:20:18.337 lat (msec) : 2=2.51%, 4=63.41%, 10=34.08% 00:20:18.337 cpu : usr=92.24%, sys=6.92%, ctx=10, majf=0, minf=1 00:20:18.337 IO depths : 1=0.1%, 2=16.0%, 4=56.6%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 issued rwts: total=9948,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:18.337 filename0: (groupid=0, jobs=1): err= 0: pid=83276: Tue Nov 5 09:43:03 2024 00:20:18.337 read: IOPS=2121, BW=16.6MiB/s (17.4MB/s)(82.9MiB/5002msec) 00:20:18.337 slat (nsec): min=7728, max=39611, avg=12573.15, stdev=3574.42 00:20:18.337 clat (usec): min=1001, max=7187, avg=3730.47, stdev=780.56 00:20:18.337 lat (usec): min=1010, max=7202, avg=3743.04, stdev=780.97 00:20:18.337 clat percentiles (usec): 00:20:18.337 | 1.00th=[ 1516], 5.00th=[ 2245], 10.00th=[ 2540], 20.00th=[ 3326], 00:20:18.337 | 30.00th=[ 3458], 40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:20:18.337 | 70.00th=[ 3982], 80.00th=[ 4228], 90.00th=[ 4555], 95.00th=[ 4883], 00:20:18.337 | 99.00th=[ 5276], 99.50th=[ 5276], 99.90th=[ 5800], 99.95th=[ 6783], 00:20:18.337 | 99.99th=[ 7111] 00:20:18.337 bw ( KiB/s): min=16096, max=18052, per=26.50%, avg=17138.22, stdev=814.46, samples=9 00:20:18.337 iops : min= 2012, max= 2256, avg=2142.22, stdev=101.74, samples=9 00:20:18.337 lat (msec) : 2=2.18%, 4=71.39%, 10=26.44% 00:20:18.337 cpu : usr=92.48%, sys=6.62%, ctx=12, majf=0, minf=0 00:20:18.337 IO depths : 1=0.1%, 2=11.3%, 4=59.4%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 issued rwts: total=10614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:18.337 filename1: (groupid=0, jobs=1): err= 0: pid=83277: Tue Nov 5 09:43:03 2024 00:20:18.337 read: IOPS=2086, BW=16.3MiB/s (17.1MB/s)(81.5MiB/5003msec) 00:20:18.337 slat (nsec): min=3688, max=39710, avg=14342.20, stdev=3202.19 00:20:18.337 clat (usec): min=1003, max=9872, avg=3788.32, stdev=778.08 00:20:18.337 lat (usec): min=1012, max=9885, avg=3802.66, stdev=778.03 00:20:18.337 clat percentiles (usec): 00:20:18.337 | 1.00th=[ 1647], 5.00th=[ 2245], 10.00th=[ 2540], 20.00th=[ 3359], 00:20:18.337 | 30.00th=[ 3752], 
40.00th=[ 3884], 50.00th=[ 3916], 60.00th=[ 3949], 00:20:18.337 | 70.00th=[ 3982], 80.00th=[ 4293], 90.00th=[ 4621], 95.00th=[ 4883], 00:20:18.337 | 99.00th=[ 5276], 99.50th=[ 5407], 99.90th=[ 7111], 99.95th=[ 7832], 00:20:18.337 | 99.99th=[ 7832] 00:20:18.337 bw ( KiB/s): min=15632, max=18080, per=26.01%, avg=16821.33, stdev=826.68, samples=9 00:20:18.337 iops : min= 1954, max= 2260, avg=2102.67, stdev=103.33, samples=9 00:20:18.337 lat (msec) : 2=1.55%, 4=70.54%, 10=27.91% 00:20:18.337 cpu : usr=91.92%, sys=7.20%, ctx=9, majf=0, minf=0 00:20:18.337 IO depths : 1=0.1%, 2=12.5%, 4=58.6%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 complete : 0=0.0%, 4=95.2%, 8=4.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 issued rwts: total=10438,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:18.337 filename1: (groupid=0, jobs=1): err= 0: pid=83278: Tue Nov 5 09:43:03 2024 00:20:18.337 read: IOPS=1888, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:20:18.337 slat (nsec): min=4558, max=61947, avg=14202.72, stdev=3907.77 00:20:18.337 clat (usec): min=751, max=6832, avg=4182.97, stdev=720.40 00:20:18.337 lat (usec): min=760, max=6845, avg=4197.17, stdev=719.97 00:20:18.337 clat percentiles (usec): 00:20:18.337 | 1.00th=[ 2057], 5.00th=[ 3326], 10.00th=[ 3392], 20.00th=[ 3884], 00:20:18.337 | 30.00th=[ 3916], 40.00th=[ 3916], 50.00th=[ 3949], 60.00th=[ 4047], 00:20:18.337 | 70.00th=[ 4555], 80.00th=[ 4621], 90.00th=[ 5080], 95.00th=[ 5407], 00:20:18.337 | 99.00th=[ 6259], 99.50th=[ 6259], 99.90th=[ 6390], 99.95th=[ 6390], 00:20:18.337 | 99.99th=[ 6849] 00:20:18.337 bw ( KiB/s): min=12944, max=16512, per=23.33%, avg=15089.33, stdev=1149.18, samples=9 00:20:18.337 iops : min= 1618, max= 2064, avg=1886.11, stdev=143.70, samples=9 00:20:18.337 lat (usec) : 1000=0.14% 00:20:18.337 lat (msec) : 2=0.60%, 4=58.29%, 10=40.97% 00:20:18.337 cpu : usr=91.90%, sys=7.20%, ctx=648, majf=0, minf=0 00:20:18.337 IO depths : 1=0.1%, 2=19.4%, 4=54.3%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:18.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 complete : 0=0.0%, 4=92.3%, 8=7.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:18.337 issued rwts: total=9446,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:18.337 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:18.337 00:20:18.337 Run status group 0 (all jobs): 00:20:18.337 READ: bw=63.2MiB/s (66.2MB/s), 14.8MiB/s-16.6MiB/s (15.5MB/s-17.4MB/s), io=316MiB (331MB), run=5002-5003msec 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.337 
09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.337 ************************************ 00:20:18.337 END TEST fio_dif_rand_params 00:20:18.337 ************************************ 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.337 00:20:18.337 real 0m23.237s 00:20:18.337 user 2m3.163s 00:20:18.337 sys 0m8.540s 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:18.337 09:43:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:20:18.337 09:43:03 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:20:18.337 09:43:03 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:18.337 09:43:03 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:18.337 09:43:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:18.337 ************************************ 00:20:18.337 START TEST fio_dif_digest 00:20:18.337 ************************************ 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:20:18.337 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest 
-- target/dif.sh@28 -- # local sub 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 bdev_null0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:18.338 [2024-11-05 09:43:03.484874] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:20:18.338 { 00:20:18.338 "params": { 00:20:18.338 "name": "Nvme$subsystem", 00:20:18.338 "trtype": "$TEST_TRANSPORT", 00:20:18.338 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:20:18.338 "adrfam": "ipv4", 00:20:18.338 "trsvcid": "$NVMF_PORT", 00:20:18.338 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:18.338 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:18.338 "hdgst": ${hdgst:-false}, 00:20:18.338 "ddgst": ${ddgst:-false} 00:20:18.338 }, 00:20:18.338 "method": "bdev_nvme_attach_controller" 00:20:18.338 } 00:20:18.338 EOF 00:20:18.338 )") 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
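Every rpc_cmd in the digest-test setup above is a thin wrapper over SPDK's scripts/rpc.py talking to the running nvmf_tgt. Redone by hand, the target-side setup would look roughly like this; the arguments are copied verbatim from the trace at dif.sh@21-24 (a DIF type 3 null bdev with 16-byte metadata, exported over NVMe/TCP on 10.0.0.3:4420), and it assumes the TCP transport was already created earlier in the run:

# Manual equivalent of create_subsystem 0 for the digest test.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420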
00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:20:18.338 "params": { 00:20:18.338 "name": "Nvme0", 00:20:18.338 "trtype": "tcp", 00:20:18.338 "traddr": "10.0.0.3", 00:20:18.338 "adrfam": "ipv4", 00:20:18.338 "trsvcid": "4420", 00:20:18.338 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:18.338 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:20:18.338 "hdgst": true, 00:20:18.338 "ddgst": true 00:20:18.338 }, 00:20:18.338 "method": "bdev_nvme_attach_controller" 00:20:18.338 }' 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:18.338 09:43:03 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:20:18.338 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:20:18.338 ... 
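Note that hdgst/ddgst live in the attach-controller JSON just printed, not in fio: header and data digests are NVMe/TCP PDU-level CRC32C checks negotiated at connect time, so the fio job itself stays ordinary (128k reads, three jobs, queue depth 3, ten seconds time-based, per dif.sh@127). A hedged sketch of rerunning the workload by hand; the file names, and using regular files where the harness uses /dev/fd/61 and /dev/fd/62, are assumptions:

# Hypothetical manual rerun of the digest workload traced above.
cat > digest.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
EOF
# nvme0.json holds the printf'd block above, with "hdgst": true and "ddgst": true.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf nvme0.json digest.fio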
00:20:18.338 fio-3.35 00:20:18.338 Starting 3 threads 00:20:28.309 00:20:28.309 filename0: (groupid=0, jobs=1): err= 0: pid=83384: Tue Nov 5 09:43:14 2024 00:20:28.309 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10005msec) 00:20:28.309 slat (nsec): min=7703, max=43406, avg=11054.78, stdev=4052.98 00:20:28.309 clat (usec): min=9305, max=17658, avg=13437.85, stdev=280.59 00:20:28.309 lat (usec): min=9313, max=17674, avg=13448.90, stdev=280.66 00:20:28.309 clat percentiles (usec): 00:20:28.309 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13304], 20.00th=[13304], 00:20:28.309 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:20:28.309 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13566], 00:20:28.309 | 99.00th=[13960], 99.50th=[14091], 99.90th=[17695], 99.95th=[17695], 00:20:28.309 | 99.99th=[17695] 00:20:28.309 bw ( KiB/s): min=27648, max=29184, per=33.31%, avg=28496.84, stdev=352.38, samples=19 00:20:28.309 iops : min= 216, max= 228, avg=222.63, stdev= 2.75, samples=19 00:20:28.309 lat (msec) : 10=0.13%, 20=99.87% 00:20:28.309 cpu : usr=90.22%, sys=9.20%, ctx=10, majf=0, minf=0 00:20:28.309 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.309 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.309 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.309 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.309 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:28.310 filename0: (groupid=0, jobs=1): err= 0: pid=83385: Tue Nov 5 09:43:14 2024 00:20:28.310 read: IOPS=222, BW=27.9MiB/s (29.2MB/s)(279MiB/10003msec) 00:20:28.310 slat (nsec): min=7827, max=67107, avg=10627.39, stdev=3714.85 00:20:28.310 clat (usec): min=6809, max=17111, avg=13436.10, stdev=327.77 00:20:28.310 lat (usec): min=6817, max=17140, avg=13446.73, stdev=327.81 00:20:28.310 clat percentiles (usec): 00:20:28.310 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13304], 20.00th=[13304], 00:20:28.310 | 30.00th=[13304], 40.00th=[13435], 50.00th=[13435], 60.00th=[13435], 00:20:28.310 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13566], 00:20:28.310 | 99.00th=[13960], 99.50th=[14091], 99.90th=[17171], 99.95th=[17171], 00:20:28.310 | 99.99th=[17171] 00:20:28.310 bw ( KiB/s): min=28416, max=29184, per=33.36%, avg=28537.26, stdev=287.72, samples=19 00:20:28.310 iops : min= 222, max= 228, avg=222.95, stdev= 2.25, samples=19 00:20:28.310 lat (msec) : 10=0.13%, 20=99.87% 00:20:28.310 cpu : usr=89.03%, sys=10.43%, ctx=21, majf=0, minf=0 00:20:28.310 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.310 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:28.310 filename0: (groupid=0, jobs=1): err= 0: pid=83386: Tue Nov 5 09:43:14 2024 00:20:28.310 read: IOPS=222, BW=27.8MiB/s (29.2MB/s)(279MiB/10005msec) 00:20:28.310 slat (nsec): min=7765, max=42508, avg=11086.88, stdev=4448.86 00:20:28.310 clat (usec): min=9229, max=18151, avg=13437.62, stdev=318.34 00:20:28.310 lat (usec): min=9237, max=18166, avg=13448.71, stdev=318.57 00:20:28.310 clat percentiles (usec): 00:20:28.310 | 1.00th=[13304], 5.00th=[13304], 10.00th=[13304], 20.00th=[13304], 00:20:28.310 | 30.00th=[13304], 40.00th=[13435], 
50.00th=[13435], 60.00th=[13435], 00:20:28.310 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13566], 95.00th=[13566], 00:20:28.310 | 99.00th=[13960], 99.50th=[14091], 99.90th=[18220], 99.95th=[18220], 00:20:28.310 | 99.99th=[18220] 00:20:28.310 bw ( KiB/s): min=27648, max=29184, per=33.31%, avg=28496.84, stdev=352.38, samples=19 00:20:28.310 iops : min= 216, max= 228, avg=222.63, stdev= 2.75, samples=19 00:20:28.310 lat (msec) : 10=0.13%, 20=99.87% 00:20:28.310 cpu : usr=88.95%, sys=10.47%, ctx=19, majf=0, minf=0 00:20:28.310 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:28.310 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.310 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:28.310 issued rwts: total=2229,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:28.310 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:28.310 00:20:28.310 Run status group 0 (all jobs): 00:20:28.310 READ: bw=83.5MiB/s (87.6MB/s), 27.8MiB/s-27.9MiB/s (29.2MB/s-29.2MB/s), io=836MiB (876MB), run=10003-10005msec 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:28.568 ************************************ 00:20:28.568 END TEST fio_dif_digest 00:20:28.568 ************************************ 00:20:28.568 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.568 00:20:28.569 real 0m10.928s 00:20:28.569 user 0m27.415s 00:20:28.569 sys 0m3.254s 00:20:28.569 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:28.569 09:43:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:20:28.569 09:43:14 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:28.569 09:43:14 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:28.569 rmmod nvme_tcp 00:20:28.569 rmmod nvme_fabrics 00:20:28.569 rmmod nvme_keyring 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:28.569 09:43:14 
nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 82638 ']' 00:20:28.569 09:43:14 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 82638 00:20:28.569 09:43:14 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 82638 ']' 00:20:28.569 09:43:14 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 82638 00:20:28.569 09:43:14 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:20:28.569 09:43:14 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:28.569 09:43:14 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 82638 00:20:28.827 killing process with pid 82638 00:20:28.827 09:43:14 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:28.827 09:43:14 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:28.827 09:43:14 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 82638' 00:20:28.827 09:43:14 nvmf_dif -- common/autotest_common.sh@971 -- # kill 82638 00:20:28.827 09:43:14 nvmf_dif -- common/autotest_common.sh@976 -- # wait 82638 00:20:28.827 09:43:14 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:28.827 09:43:14 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:29.085 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:29.085 Waiting for block devices as requested 00:20:29.344 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:29.344 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:29.344 09:43:15 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:29.603 09:43:15 
nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.603 09:43:15 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:29.603 09:43:15 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.603 09:43:15 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:20:29.603 ************************************ 00:20:29.603 END TEST nvmf_dif 00:20:29.603 ************************************ 00:20:29.603 00:20:29.603 real 0m58.847s 00:20:29.603 user 3m45.582s 00:20:29.603 sys 0m20.089s 00:20:29.603 09:43:15 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:29.603 09:43:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:20:29.603 09:43:15 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:29.603 09:43:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:29.603 09:43:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:29.603 09:43:15 -- common/autotest_common.sh@10 -- # set +x 00:20:29.603 ************************************ 00:20:29.603 START TEST nvmf_abort_qd_sizes 00:20:29.603 ************************************ 00:20:29.603 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:29.862 * Looking for test storage... 00:20:29.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:20:29.862 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.863 --rc genhtml_branch_coverage=1 00:20:29.863 --rc genhtml_function_coverage=1 00:20:29.863 --rc genhtml_legend=1 00:20:29.863 --rc geninfo_all_blocks=1 00:20:29.863 --rc geninfo_unexecuted_blocks=1 00:20:29.863 00:20:29.863 ' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.863 --rc genhtml_branch_coverage=1 00:20:29.863 --rc genhtml_function_coverage=1 00:20:29.863 --rc genhtml_legend=1 00:20:29.863 --rc geninfo_all_blocks=1 00:20:29.863 --rc geninfo_unexecuted_blocks=1 00:20:29.863 00:20:29.863 ' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.863 --rc genhtml_branch_coverage=1 00:20:29.863 --rc genhtml_function_coverage=1 00:20:29.863 --rc genhtml_legend=1 00:20:29.863 --rc geninfo_all_blocks=1 00:20:29.863 --rc geninfo_unexecuted_blocks=1 00:20:29.863 00:20:29.863 ' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:29.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:29.863 --rc genhtml_branch_coverage=1 00:20:29.863 --rc genhtml_function_coverage=1 00:20:29.863 --rc genhtml_legend=1 00:20:29.863 --rc geninfo_all_blocks=1 00:20:29.863 --rc geninfo_unexecuted_blocks=1 00:20:29.863 00:20:29.863 ' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:29.863 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:29.863 Cannot find device "nvmf_init_br" 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:29.863 Cannot find device "nvmf_init_br2" 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:29.863 Cannot find device "nvmf_tgt_br" 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:20:29.863 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:30.122 Cannot find device "nvmf_tgt_br2" 00:20:30.122 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:30.123 Cannot find device "nvmf_init_br" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:30.123 Cannot find device "nvmf_init_br2" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:30.123 Cannot find device "nvmf_tgt_br" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:30.123 Cannot find device "nvmf_tgt_br2" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:30.123 Cannot find device "nvmf_br" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:30.123 Cannot find device "nvmf_init_if" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:30.123 Cannot find device "nvmf_init_if2" 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:30.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
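Two kinds of noise in the init trace above are expected. The "[: : integer expression expected" message from nvmf/common.sh line 33 is an unquoted, empty variable reaching an integer test ('[' '' -eq 1 ']'); the usual hardening is a default expansion. The "Cannot find device" / "Cannot open network namespace" messages come from nvmf_veth_init tearing down any leftover topology before building a fresh one; each cleanup command is paired with "|| true" (hence the "# true" entries in the trace), so a clean host and a dirty host converge to the same starting state. A minimal sketch of both patterns, with SOME_FLAG a hypothetical stand-in for whichever variable expands empty at common.sh:33:

  # Default an unset/empty variable before an integer test (SOME_FLAG is a
  # placeholder; the real variable name is not visible in this trace):
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then
      echo "flag enabled"
  fi

  # Idempotent teardown: ignore failures so setup always starts clean.
  for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$dev" nomaster || true
  done
  ip link delete nvmf_br type bridge || true
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true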
00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:30.123 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:30.123 09:43:15 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:30.123 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:30.382 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:30.382 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.075 ms 00:20:30.382 00:20:30.382 --- 10.0.0.3 ping statistics --- 00:20:30.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.382 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:30.382 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:30.382 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:20:30.382 00:20:30.382 --- 10.0.0.4 ping statistics --- 00:20:30.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.382 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:30.382 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:30.382 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:30.382 00:20:30.382 --- 10.0.0.1 ping statistics --- 00:20:30.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.382 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:30.382 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:30.382 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:20:30.382 00:20:30.382 --- 10.0.0.2 ping statistics --- 00:20:30.382 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:30.382 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:20:30.382 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:30.949 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.949 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:30.949 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=84023 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 84023 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 84023 ']' 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:31.207 09:43:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.207 [2024-11-05 09:43:17.042308] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
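The nvmf_tgt now coming up inside nvmf_tgt_ns_spdk will listen on the target-side addresses of the topology nvmf_veth_init just built and ping-verified above: two veth pairs for the initiator in the root namespace (10.0.0.1/.2), two for the target inside the namespace (10.0.0.3/.4), with all four peer ends enslaved to the nvmf_br bridge. Condensed from the trace (note that every rule ipts installs carries an SPDK_NVMF comment tag, which is what lets teardown strip exactly these rules later):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br       # target side
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br   # likewise for the other three ends
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  # bring all links up, then ping 10.0.0.3/.4 from the root namespace and
  # 10.0.0.1/.2 from inside the namespace, as shown above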
00:20:31.207 [2024-11-05 09:43:17.042572] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.465 [2024-11-05 09:43:17.199718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:31.465 [2024-11-05 09:43:17.240102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:31.465 [2024-11-05 09:43:17.240158] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:31.465 [2024-11-05 09:43:17.240172] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:31.465 [2024-11-05 09:43:17.240182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:31.465 [2024-11-05 09:43:17.240201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:31.465 [2024-11-05 09:43:17.241063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:31.465 [2024-11-05 09:43:17.241823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:31.465 [2024-11-05 09:43:17.241923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:31.465 [2024-11-05 09:43:17.241941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:31.465 [2024-11-05 09:43:17.276550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:20:31.465 09:43:17 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
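The scan that just finished is nvme_in_userspace deciding which controllers the test may claim: it renders the NVMe PCI class into a string (class 01h mass storage, subclass 08h non-volatile memory, prog-if 02h NVM Express, hence the "0108" and -p02 filters), walks lspci for matching BDFs, and keeps those that pass pci_can_use and sit on the nvme driver. Both QEMU devices qualify, and 0000:00:10.0 is the one picked below. A rough standalone equivalent of the traced pipeline (a sketch; the real helper also honors PCI allow/deny lists via pci_can_use):

  # List NVMe controller BDFs by PCI class code 0108, prog-if 02.
  lspci -mm -n -D | grep -i -- -p02 \
      | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
  # expected here: 0000:00:10.0 and 0000:00:11.0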
00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:20:31.465 09:43:17 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:20:31.466 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:31.466 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:31.466 09:43:17 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:31.466 ************************************ 00:20:31.466 START TEST spdk_target_abort 00:20:31.466 ************************************ 00:20:31.466 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:20:31.466 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:31.466 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:20:31.466 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.466 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 spdk_targetn1 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 [2024-11-05 09:43:17.484874] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:31.724 [2024-11-05 09:43:17.517004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:31.724 09:43:17 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:31.724 09:43:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:35.014 Initializing NVMe Controllers 00:20:35.014 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:20:35.014 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:35.014 Initialization complete. Launching workers. 
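The abort run launching above is the first of three passes: rabort has assembled the -r transport string field by field (trtype, adrfam, traddr, trsvcid, subnqn) and runs the abort example once per queue depth. In each result block below the accounting is internally consistent: aborts submitted plus aborts that failed to submit equals I/Os completed (1026 + 9278 = 10304 in the first pass), and successful plus unsuccessful aborts equals aborts submitted (701 + 325 = 1026). The loop, condensed from the trace:

  qds=(4 24 64)
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  for qd in "${qds[@]}"; do
      # -w rw -M 50: mixed 50/50 read/write, 4 KiB I/O, abort storm at depth $qd
      /home/vagrant/spdk_repo/spdk/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done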
00:20:35.014 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10304, failed: 0
00:20:35.014 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1026, failed to submit 9278
00:20:35.014 success 701, unsuccessful 325, failed 0
00:20:35.014 09:43:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:20:35.014 09:43:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:20:38.327 Initializing NVMe Controllers
00:20:38.327 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:20:38.327 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:20:38.327 Initialization complete. Launching workers.
00:20:38.327 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8965, failed: 0
00:20:38.327 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1163, failed to submit 7802
00:20:38.327 success 422, unsuccessful 741, failed 0
00:20:38.327 09:43:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}"
00:20:38.327 09:43:24 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
00:20:41.611 Initializing NVMe Controllers
00:20:41.611 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn
00:20:41.611 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0
00:20:41.611 Initialization complete. Launching workers.
00:20:41.611 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31381, failed: 0
00:20:41.611 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2292, failed to submit 29089
00:20:41.611 success 473, unsuccessful 1819, failed 0
00:20:41.611 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
00:20:41.612 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.612 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:20:41.612 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:41.612 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target
00:20:41.612 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:41.612 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:20:42.178 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:42.178 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 84023
00:20:42.178 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 84023 ']'
00:20:42.178 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 84023
00:20:42.178 09:43:27 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84023
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84023'
00:20:42.178 killing process with pid 84023
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 84023
00:20:42.178 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 84023
00:20:42.437
00:20:42.437 real 0m10.758s
00:20:42.437 user 0m40.824s
00:20:42.437 sys 0m2.019s
00:20:42.437 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable
00:20:42.437 09:43:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x
00:20:42.437 ************************************
00:20:42.437 END TEST spdk_target_abort
00:20:42.437 ************************************
00:20:42.437 09:43:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target
00:20:42.437 09:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:20:42.437 09:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable
00:20:42.437 09:43:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:20:42.437 ************************************
00:20:42.437 START TEST kernel_target_abort
************************************ 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:42.437 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:42.695 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:42.695 Waiting for block devices as requested 00:20:42.695 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.954 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:42.954 No valid GPT data, bailing 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:42.954 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:43.213 No valid GPT data, bailing 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:43.213 09:43:28 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:43.213 No valid GPT data, bailing 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:43.213 No valid GPT data, bailing 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]]
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4
00:20:43.213 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 --hostid=5243355a-262e-4d66-b861-d6387f15e8f8 -a 10.0.0.1 -t tcp -s 4420
00:20:43.214
00:20:43.214 Discovery Log Number of Records 2, Generation counter 2
00:20:43.214 =====Discovery Log Entry 0======
00:20:43.214 trtype: tcp
00:20:43.214 adrfam: ipv4
00:20:43.214 subtype: current discovery subsystem
00:20:43.214 treq: not specified, sq flow control disable supported
00:20:43.214 portid: 1
00:20:43.214 trsvcid: 4420
00:20:43.214 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:20:43.214 traddr: 10.0.0.1
00:20:43.214 eflags: none
00:20:43.214 sectype: none
00:20:43.214 =====Discovery Log Entry 1======
00:20:43.214 trtype: tcp
00:20:43.214 adrfam: ipv4
00:20:43.214 subtype: nvme subsystem
00:20:43.214 treq: not specified, sq flow control disable supported
00:20:43.214 portid: 1
00:20:43.214 trsvcid: 4420
00:20:43.214 subnqn: nqn.2016-06.io.spdk:testnqn
00:20:43.214 traddr: 10.0.0.1
00:20:43.214 eflags: none
00:20:43.214 sectype: none
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:20:43.214 09:43:29
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:43.214 09:43:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:46.499 Initializing NVMe Controllers 00:20:46.499 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:46.499 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:46.499 Initialization complete. Launching workers. 00:20:46.499 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31192, failed: 0 00:20:46.499 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31192, failed to submit 0 00:20:46.499 success 0, unsuccessful 31192, failed 0 00:20:46.499 09:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:46.499 09:43:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:49.785 Initializing NVMe Controllers 00:20:49.785 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:49.785 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:49.785 Initialization complete. Launching workers. 
00:20:49.785 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66601, failed: 0 00:20:49.785 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28781, failed to submit 37820 00:20:49.785 success 0, unsuccessful 28781, failed 0 00:20:49.785 09:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:49.785 09:43:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:53.070 Initializing NVMe Controllers 00:20:53.070 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:53.070 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:20:53.070 Initialization complete. Launching workers. 00:20:53.070 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 79712, failed: 0 00:20:53.070 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19924, failed to submit 59788 00:20:53.070 success 0, unsuccessful 19924, failed 0 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:20:53.070 09:43:38 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:53.637 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:55.537 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.537 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:55.537 ************************************ 00:20:55.537 END TEST kernel_target_abort 00:20:55.537 ************************************ 00:20:55.537 00:20:55.537 real 0m13.087s 00:20:55.537 user 0m6.314s 00:20:55.537 sys 0m4.281s 00:20:55.537 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:55.537 09:43:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:20:55.537 
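Note the contrast with the spdk_target_abort phase: here the target was the kernel nvmet module driven purely through configfs, which is why every abort pass reports success 0 (the kernel target completes I/O before the aborts land), and clean_kernel_target above unwound the earlier mkdir/echo/ln -s steps in reverse before unloading nvmet_tcp/nvmet. The xtrace shows the values echoed but not their destination files, so the attribute names in this sketch are the standard nvmet configfs ones, assumed rather than read from this log (/dev/nvme1n1 is the device the GPT probe found safe to claim):

  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$nvmet/ports/1"
  # (the traced "echo SPDK-nqn..." serial/model write is omitted here; its
  # destination attribute is not visible in the xtrace)
  echo 1 > "$subsys/attr_allow_any_host"        # assumed attribute name
  echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
  echo 1 > "$subsys/namespaces/1/enable"
  echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
  echo tcp > "$nvmet/ports/1/addr_trtype"
  echo 4420 > "$nvmet/ports/1/addr_trsvcid"
  echo ipv4 > "$nvmet/ports/1/addr_adrfam"
  ln -s "$subsys" "$nvmet/ports/1/subsystems/"
  # teardown, as clean_kernel_target traced above:
  echo 0 > "$subsys/namespaces/1/enable"
  rm -f "$nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn"
  rmdir "$subsys/namespaces/1" "$nvmet/ports/1" "$subsys"
  modprobe -r nvmet_tcp nvmet

nvmftestfini below then unloads nvme-tcp and restores iptables minus the SPDK_NVMF-tagged rules (iptables-save | grep -v SPDK_NVMF | iptables-restore), leaving the host firewall otherwise untouched.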
09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:55.537 rmmod nvme_tcp 00:20:55.537 rmmod nvme_fabrics 00:20:55.537 rmmod nvme_keyring 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 84023 ']' 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 84023 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 84023 ']' 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 84023 00:20:55.537 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (84023) - No such process 00:20:55.537 Process with pid 84023 is not found 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 84023 is not found' 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:20:55.537 09:43:41 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:56.114 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:56.114 Waiting for block devices as requested 00:20:56.114 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:56.114 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:56.114 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:56.394 09:43:42 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:20:56.394 00:20:56.394 real 0m26.740s 00:20:56.394 user 0m48.266s 00:20:56.394 sys 0m7.701s 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:20:56.394 09:43:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:20:56.394 ************************************ 00:20:56.394 END TEST nvmf_abort_qd_sizes 00:20:56.394 ************************************ 00:20:56.394 09:43:42 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:56.394 09:43:42 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:20:56.394 09:43:42 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:20:56.394 09:43:42 -- common/autotest_common.sh@10 -- # set +x 00:20:56.394 ************************************ 00:20:56.394 START TEST keyring_file 00:20:56.394 ************************************ 00:20:56.394 09:43:42 keyring_file -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:20:56.662 * Looking for test storage... 
00:20:56.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@345 -- # : 1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@353 -- # local d=1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@355 -- # echo 1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@353 -- # local d=2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@355 -- # echo 2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@368 -- # return 0 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:20:56.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.662 --rc genhtml_branch_coverage=1 00:20:56.662 --rc genhtml_function_coverage=1 00:20:56.662 --rc genhtml_legend=1 00:20:56.662 --rc geninfo_all_blocks=1 00:20:56.662 --rc geninfo_unexecuted_blocks=1 00:20:56.662 00:20:56.662 ' 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:20:56.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.662 --rc genhtml_branch_coverage=1 00:20:56.662 --rc genhtml_function_coverage=1 00:20:56.662 --rc genhtml_legend=1 00:20:56.662 --rc geninfo_all_blocks=1 00:20:56.662 --rc 
geninfo_unexecuted_blocks=1 00:20:56.662 00:20:56.662 ' 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:20:56.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.662 --rc genhtml_branch_coverage=1 00:20:56.662 --rc genhtml_function_coverage=1 00:20:56.662 --rc genhtml_legend=1 00:20:56.662 --rc geninfo_all_blocks=1 00:20:56.662 --rc geninfo_unexecuted_blocks=1 00:20:56.662 00:20:56.662 ' 00:20:56.662 09:43:42 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:20:56.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:56.662 --rc genhtml_branch_coverage=1 00:20:56.662 --rc genhtml_function_coverage=1 00:20:56.662 --rc genhtml_legend=1 00:20:56.662 --rc geninfo_all_blocks=1 00:20:56.662 --rc geninfo_unexecuted_blocks=1 00:20:56.662 00:20:56.662 ' 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:20:56.662 09:43:42 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.662 09:43:42 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.662 09:43:42 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.662 09:43:42 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.662 09:43:42 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.662 09:43:42 keyring_file -- paths/export.sh@5 -- # export PATH 00:20:56.662 09:43:42 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@51 -- # : 0 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:56.662 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:56.662 09:43:42 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:56.662 09:43:42 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:20:56.662 09:43:42 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:20:56.662 09:43:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:56.662 09:43:42 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:20:56.662 09:43:42 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:20:56.662 09:43:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:56.662 09:43:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.bjChqKxsUV 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:20:56.663 09:43:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:20:56.663 09:43:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:56.663 09:43:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:56.663 09:43:42 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:20:56.663 09:43:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:56.663 09:43:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.bjChqKxsUV 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.bjChqKxsUV 00:20:56.663 09:43:42 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.bjChqKxsUV 00:20:56.663 09:43:42 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@17 -- # name=key1 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@17 -- # digest=0 00:20:56.663 09:43:42 keyring_file -- keyring/common.sh@18 -- # mktemp 00:20:56.921 09:43:42 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.TPPF3i5p4Z 00:20:56.921 09:43:42 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:20:56.921 09:43:42 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:20:56.921 09:43:42 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:20:56.921 09:43:42 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:20:56.921 09:43:42 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:20:56.922 09:43:42 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:20:56.922 09:43:42 keyring_file -- nvmf/common.sh@733 -- # python - 00:20:56.922 09:43:42 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.TPPF3i5p4Z 00:20:56.922 09:43:42 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.TPPF3i5p4Z 00:20:56.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
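prep_key converts each raw hex key into the NVMe TLS PSK interchange format and writes it to a mode-0600 temp file; the conversion itself happens in the inline python at nvmf/common.sh@733, which xtrace does not expand. A rough sketch of what that step computes, on my reading of the TP 8006 interchange framing (prefix "NVMeTLSkey-1", a two-digit hash indicator, "00" here because digest=0, then base64 of the key bytes with a little-endian CRC-32 appended); treat the framing details as an assumption about the helper, not a quote of it:

  key=00112233445566778899aabbccddeeff   # raw hex PSK, as passed to prep_key
  python3 - "$key" <<'PY'
  import base64, sys, zlib
  raw = bytes.fromhex(sys.argv[1])
  crc = zlib.crc32(raw).to_bytes(4, "little")  # CRC-32 of the key, appended as an integrity check
  print(f"NVMeTLSkey-1:00:{base64.b64encode(raw + crc).decode()}:")
  PY

The chmod 0600 after mktemp is not cosmetic: keyring_file rejects key files that are readable by group or other, a rule one of the negative tests further down exercises deliberately.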
00:20:56.922 09:43:42 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.TPPF3i5p4Z 00:20:56.922 09:43:42 keyring_file -- keyring/file.sh@30 -- # tgtpid=84927 00:20:56.922 09:43:42 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:56.922 09:43:42 keyring_file -- keyring/file.sh@32 -- # waitforlisten 84927 00:20:56.922 09:43:42 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84927 ']' 00:20:56.922 09:43:42 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:56.922 09:43:42 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:56.922 09:43:42 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:56.922 09:43:42 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:56.922 09:43:42 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:56.922 [2024-11-05 09:43:42.739695] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:20:56.922 [2024-11-05 09:43:42.740014] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84927 ] 00:20:57.180 [2024-11-05 09:43:42.892688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.180 [2024-11-05 09:43:42.932603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:57.180 [2024-11-05 09:43:42.979921] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.180 09:43:43 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.180 09:43:43 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:20:57.180 09:43:43 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:20:57.180 09:43:43 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.180 09:43:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.180 [2024-11-05 09:43:43.122638] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:57.439 null0 00:20:57.439 [2024-11-05 09:43:43.154605] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:57.439 [2024-11-05 09:43:43.154987] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.439 09:43:43 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.439 [2024-11-05 09:43:43.182594] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:20:57.439 request: 00:20:57.439 { 00:20:57.439 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:20:57.439 "secure_channel": false, 00:20:57.439 "listen_address": { 00:20:57.439 "trtype": "tcp", 00:20:57.439 "traddr": "127.0.0.1", 00:20:57.439 "trsvcid": "4420" 00:20:57.439 }, 00:20:57.439 "method": "nvmf_subsystem_add_listener", 00:20:57.439 "req_id": 1 00:20:57.439 } 00:20:57.439 Got JSON-RPC error response 00:20:57.439 response: 00:20:57.439 { 00:20:57.439 "code": -32602, 00:20:57.439 "message": "Invalid parameters" 00:20:57.439 } 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:57.439 09:43:43 keyring_file -- keyring/file.sh@47 -- # bperfpid=84931 00:20:57.439 09:43:43 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:20:57.439 09:43:43 keyring_file -- keyring/file.sh@49 -- # waitforlisten 84931 /var/tmp/bperf.sock 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 84931 ']' 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:20:57.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:57.439 09:43:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:20:57.439 [2024-11-05 09:43:43.248060] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:20:57.439 [2024-11-05 09:43:43.248152] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84931 ] 00:20:57.439 [2024-11-05 09:43:43.397942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.698 [2024-11-05 09:43:43.437325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:57.698 [2024-11-05 09:43:43.472130] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:57.698 09:43:43 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:57.698 09:43:43 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:20:57.698 09:43:43 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:20:57.698 09:43:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:20:57.957 09:43:43 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TPPF3i5p4Z 00:20:57.957 09:43:43 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TPPF3i5p4Z 00:20:58.216 09:43:44 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:20:58.216 09:43:44 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:20:58.216 09:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.216 09:43:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.216 09:43:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:58.474 09:43:44 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.bjChqKxsUV == \/\t\m\p\/\t\m\p\.\b\j\C\h\q\K\x\s\U\V ]] 00:20:58.474 09:43:44 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:20:58.474 09:43:44 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:20:58.474 09:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.474 09:43:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.474 09:43:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:58.733 09:43:44 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.TPPF3i5p4Z == \/\t\m\p\/\t\m\p\.\T\P\P\F\3\i\5\p\4\Z ]] 00:20:58.733 09:43:44 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:20:58.733 09:43:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:58.733 09:43:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:58.733 09:43:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:58.733 09:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.733 09:43:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:58.991 09:43:44 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:20:58.991 09:43:44 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:20:58.991 09:43:44 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:20:58.991 09:43:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:58.991 09:43:44 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:20:58.991 09:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:58.991 09:43:44 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:20:59.558 09:43:45 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:20:59.558 09:43:45 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:59.558 09:43:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:20:59.817 [2024-11-05 09:43:45.558476] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:59.817 nvme0n1 00:20:59.817 09:43:45 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:20:59.817 09:43:45 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:20:59.817 09:43:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:20:59.817 09:43:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:20:59.817 09:43:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:20:59.817 09:43:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.076 09:43:45 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:21:00.076 09:43:45 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:21:00.076 09:43:45 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:00.076 09:43:45 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:00.076 09:43:45 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:00.076 09:43:45 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:00.076 09:43:45 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:00.334 09:43:46 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:21:00.334 09:43:46 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:00.592 Running I/O for 1 seconds... 
00:21:01.529 11381.00 IOPS, 44.46 MiB/s 00:21:01.529 Latency(us) 00:21:01.529 [2024-11-05T09:43:47.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:01.529 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:21:01.529 nvme0n1 : 1.01 11430.67 44.65 0.00 0.00 11167.56 4379.00 16801.05 00:21:01.529 [2024-11-05T09:43:47.487Z] =================================================================================================================== 00:21:01.529 [2024-11-05T09:43:47.487Z] Total : 11430.67 44.65 0.00 0.00 11167.56 4379.00 16801.05 00:21:01.529 { 00:21:01.529 "results": [ 00:21:01.529 { 00:21:01.529 "job": "nvme0n1", 00:21:01.529 "core_mask": "0x2", 00:21:01.529 "workload": "randrw", 00:21:01.529 "percentage": 50, 00:21:01.529 "status": "finished", 00:21:01.529 "queue_depth": 128, 00:21:01.529 "io_size": 4096, 00:21:01.529 "runtime": 1.006853, 00:21:01.529 "iops": 11430.665648312117, 00:21:01.529 "mibps": 44.65103768871921, 00:21:01.529 "io_failed": 0, 00:21:01.529 "io_timeout": 0, 00:21:01.529 "avg_latency_us": 11167.564968443668, 00:21:01.529 "min_latency_us": 4378.996363636364, 00:21:01.529 "max_latency_us": 16801.04727272727 00:21:01.529 } 00:21:01.529 ], 00:21:01.529 "core_count": 1 00:21:01.529 } 00:21:01.529 09:43:47 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:01.529 09:43:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:01.787 09:43:47 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:21:01.787 09:43:47 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:01.787 09:43:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:01.787 09:43:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:01.787 09:43:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:01.787 09:43:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.046 09:43:47 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:21:02.046 09:43:47 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:21:02.046 09:43:47 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:02.046 09:43:47 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:02.046 09:43:47 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.046 09:43:47 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:02.046 09:43:47 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.304 09:43:48 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:21:02.304 09:43:48 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:02.304 09:43:48 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:02.304 09:43:48 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:02.304 09:43:48 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:02.304 09:43:48 keyring_file -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.305 09:43:48 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:02.305 09:43:48 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:02.305 09:43:48 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:02.305 09:43:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:21:02.563 [2024-11-05 09:43:48.459913] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:02.563 [2024-11-05 09:43:48.460152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73770 (107): Transport endpoint is not connected 00:21:02.563 [2024-11-05 09:43:48.461142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a73770 (9): Bad file descriptor 00:21:02.563 [2024-11-05 09:43:48.462138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:02.563 [2024-11-05 09:43:48.462165] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:02.563 [2024-11-05 09:43:48.462178] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:02.563 [2024-11-05 09:43:48.462189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
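This is the wrong-key negative test: the earlier attach succeeded with --psk key0, so retrying with --psk key1 should not survive the TLS handshake. The target drops the connection, the initiator logs errno 107 (Transport endpoint is not connected) and then a bad file descriptor, and the failure surfaces as -5, Input/output error, in the request/response dump below. The NOT wrapper that keeps the suite running through such expected failures inverts the wrapped command's exit status; a rough model of its behaviour (a sketch, not the verbatim helper from autotest_common.sh):

  NOT() {
      local es=0
      "$@" || es=$?              # run the command, remembering how it exited
      ((es > 128)) && return 1   # killed by a signal: count that as a genuine failure
      ((es != 0))                # a plain nonzero exit is exactly what the test wants
  }
  NOT false && echo "negative test passed"   # asserts that the wrapped command fails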
00:21:02.563 request: 00:21:02.563 { 00:21:02.563 "name": "nvme0", 00:21:02.563 "trtype": "tcp", 00:21:02.563 "traddr": "127.0.0.1", 00:21:02.563 "adrfam": "ipv4", 00:21:02.563 "trsvcid": "4420", 00:21:02.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:02.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:02.563 "prchk_reftag": false, 00:21:02.563 "prchk_guard": false, 00:21:02.563 "hdgst": false, 00:21:02.563 "ddgst": false, 00:21:02.563 "psk": "key1", 00:21:02.563 "allow_unrecognized_csi": false, 00:21:02.563 "method": "bdev_nvme_attach_controller", 00:21:02.563 "req_id": 1 00:21:02.563 } 00:21:02.563 Got JSON-RPC error response 00:21:02.563 response: 00:21:02.563 { 00:21:02.563 "code": -5, 00:21:02.563 "message": "Input/output error" 00:21:02.563 } 00:21:02.563 09:43:48 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:02.563 09:43:48 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:02.563 09:43:48 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:02.563 09:43:48 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:02.563 09:43:48 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:21:02.563 09:43:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:02.563 09:43:48 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:02.563 09:43:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.563 09:43:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:02.563 09:43:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.821 09:43:48 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:21:02.821 09:43:48 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:21:02.821 09:43:48 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:02.821 09:43:48 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:02.821 09:43:48 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:02.821 09:43:48 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:02.821 09:43:48 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:03.388 09:43:49 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:21:03.388 09:43:49 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:21:03.388 09:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:03.646 09:43:49 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:21:03.646 09:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:21:03.906 09:43:49 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:21:03.906 09:43:49 keyring_file -- keyring/file.sh@78 -- # jq length 00:21:03.906 09:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.164 09:43:49 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:21:04.164 09:43:49 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.bjChqKxsUV 00:21:04.164 09:43:49 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:21:04.164 09:43:49 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:21:04.164 09:43:49 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:21:04.164 09:43:49 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:04.164 09:43:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.164 09:43:49 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:04.164 09:43:49 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.164 09:43:49 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:21:04.164 09:43:49 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:21:04.422 [2024-11-05 09:43:50.224417] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.bjChqKxsUV': 0100660 00:21:04.422 [2024-11-05 09:43:50.224497] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:04.422 request: 00:21:04.422 { 00:21:04.422 "name": "key0", 00:21:04.422 "path": "/tmp/tmp.bjChqKxsUV", 00:21:04.422 "method": "keyring_file_add_key", 00:21:04.422 "req_id": 1 00:21:04.422 } 00:21:04.422 Got JSON-RPC error response 00:21:04.422 response: 00:21:04.422 { 00:21:04.422 "code": -1, 00:21:04.422 "message": "Operation not permitted" 00:21:04.422 } 00:21:04.422 09:43:50 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:04.422 09:43:50 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:04.422 09:43:50 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:04.422 09:43:50 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:04.422 09:43:50 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.bjChqKxsUV 00:21:04.422 09:43:50 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:21:04.422 09:43:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.bjChqKxsUV 00:21:04.681 09:43:50 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.bjChqKxsUV 00:21:04.681 09:43:50 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:21:04.681 09:43:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:04.681 09:43:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:04.681 09:43:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:04.681 09:43:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:04.681 09:43:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:04.940 09:43:50 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:21:04.940 09:43:50 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.940 09:43:50 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:21:04.940 09:43:50 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.940 09:43:50 
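Two more negative cases are packed in here. First, permissions: keyring_file stats the key file at registration time and refuses anything more permissive than owner read/write, so after chmod 0660 the add is rejected with "Invalid permissions for key file ... 0100660" (JSON-RPC -1, Operation not permitted), and it only succeeds once the mode is back to 0600. Second, the rm -f straight afterwards leaves key0 registered (refcnt still 1) while its backing file is gone; the path is resolved again at connect time, which is why the attach below fails with -19, No such device. A hand-run reproduction of the permission half against the same bperf socket (run standalone, since key0 already exists at this point in the test; the key name, path, and placeholder content are illustrative, not fixed values):

  key=$(mktemp) && echo "NVMeTLSkey-1:00:placeholder:" > "$key"
  chmod 0660 "$key"
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key" \
      || echo "rejected as expected with mode 0660"
  chmod 0600 "$key"
  scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key"   # accepted now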
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:04.940 09:43:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.940 09:43:50 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:04.940 09:43:50 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:04.940 09:43:50 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:04.940 09:43:50 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.199 [2024-11-05 09:43:51.024573] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.bjChqKxsUV': No such file or directory 00:21:05.199 [2024-11-05 09:43:51.024618] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:21:05.199 [2024-11-05 09:43:51.024656] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:21:05.199 [2024-11-05 09:43:51.024665] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:21:05.199 [2024-11-05 09:43:51.024674] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:05.199 [2024-11-05 09:43:51.024682] bdev_nvme.c:6667:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:21:05.199 request: 00:21:05.199 { 00:21:05.199 "name": "nvme0", 00:21:05.199 "trtype": "tcp", 00:21:05.199 "traddr": "127.0.0.1", 00:21:05.199 "adrfam": "ipv4", 00:21:05.199 "trsvcid": "4420", 00:21:05.199 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:05.199 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:05.199 "prchk_reftag": false, 00:21:05.199 "prchk_guard": false, 00:21:05.199 "hdgst": false, 00:21:05.199 "ddgst": false, 00:21:05.199 "psk": "key0", 00:21:05.199 "allow_unrecognized_csi": false, 00:21:05.199 "method": "bdev_nvme_attach_controller", 00:21:05.199 "req_id": 1 00:21:05.199 } 00:21:05.199 Got JSON-RPC error response 00:21:05.199 response: 00:21:05.199 { 00:21:05.199 "code": -19, 00:21:05.199 "message": "No such device" 00:21:05.199 } 00:21:05.199 09:43:51 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:21:05.199 09:43:51 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:05.199 09:43:51 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:05.199 09:43:51 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:05.199 09:43:51 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:21:05.199 09:43:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:05.458 09:43:51 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@17 -- # name=key0 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:05.458 
09:43:51 keyring_file -- keyring/common.sh@17 -- # digest=0 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@18 -- # mktemp 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.3poZ6AD9ix 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:05.458 09:43:51 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:05.458 09:43:51 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:21:05.458 09:43:51 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:05.458 09:43:51 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:05.458 09:43:51 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:21:05.458 09:43:51 keyring_file -- nvmf/common.sh@733 -- # python - 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3poZ6AD9ix 00:21:05.458 09:43:51 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.3poZ6AD9ix 00:21:05.458 09:43:51 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.3poZ6AD9ix 00:21:05.458 09:43:51 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3poZ6AD9ix 00:21:05.459 09:43:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3poZ6AD9ix 00:21:05.717 09:43:51 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:05.717 09:43:51 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:06.285 nvme0n1 00:21:06.285 09:43:52 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:21:06.285 09:43:52 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:06.285 09:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:06.285 09:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:06.285 09:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:06.285 09:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:06.543 09:43:52 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:21:06.543 09:43:52 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:21:06.543 09:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:21:06.801 09:43:52 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:21:06.801 09:43:52 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:21:06.801 09:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:06.801 09:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:06.801 09:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.059 09:43:52 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:21:07.059 09:43:52 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:21:07.059 09:43:52 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:21:07.059 09:43:52 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:07.059 09:43:52 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:07.059 09:43:52 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.059 09:43:52 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:07.318 09:43:53 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:21:07.318 09:43:53 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:07.318 09:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:07.576 09:43:53 keyring_file -- keyring/file.sh@105 -- # jq length 00:21:07.576 09:43:53 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:21:07.576 09:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:07.834 09:43:53 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:21:07.834 09:43:53 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.3poZ6AD9ix 00:21:07.834 09:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.3poZ6AD9ix 00:21:08.095 09:43:53 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.TPPF3i5p4Z 00:21:08.096 09:43:53 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.TPPF3i5p4Z 00:21:08.379 09:43:54 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.379 09:43:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:21:08.643 nvme0n1 00:21:08.902 09:43:54 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:21:08.902 09:43:54 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:21:09.160 09:43:54 keyring_file -- keyring/file.sh@113 -- # config='{ 00:21:09.160 "subsystems": [ 00:21:09.160 { 00:21:09.160 "subsystem": "keyring", 00:21:09.160 "config": [ 00:21:09.160 { 00:21:09.160 "method": "keyring_file_add_key", 00:21:09.160 "params": { 00:21:09.160 "name": "key0", 00:21:09.160 "path": "/tmp/tmp.3poZ6AD9ix" 00:21:09.160 } 00:21:09.160 }, 00:21:09.160 { 00:21:09.160 "method": "keyring_file_add_key", 00:21:09.160 "params": { 00:21:09.160 "name": "key1", 00:21:09.160 "path": "/tmp/tmp.TPPF3i5p4Z" 00:21:09.160 } 00:21:09.160 } 00:21:09.160 ] 00:21:09.160 }, 00:21:09.160 { 00:21:09.160 "subsystem": "iobuf", 00:21:09.160 "config": [ 00:21:09.160 { 00:21:09.160 "method": "iobuf_set_options", 00:21:09.160 "params": { 00:21:09.160 "small_pool_count": 8192, 00:21:09.160 "large_pool_count": 1024, 00:21:09.160 "small_bufsize": 8192, 00:21:09.160 "large_bufsize": 135168, 00:21:09.160 "enable_numa": false 00:21:09.160 } 00:21:09.160 } 00:21:09.160 ] 00:21:09.160 }, 00:21:09.160 { 00:21:09.160 "subsystem": 
"sock", 00:21:09.160 "config": [ 00:21:09.160 { 00:21:09.160 "method": "sock_set_default_impl", 00:21:09.160 "params": { 00:21:09.160 "impl_name": "uring" 00:21:09.160 } 00:21:09.160 }, 00:21:09.160 { 00:21:09.160 "method": "sock_impl_set_options", 00:21:09.161 "params": { 00:21:09.161 "impl_name": "ssl", 00:21:09.161 "recv_buf_size": 4096, 00:21:09.161 "send_buf_size": 4096, 00:21:09.161 "enable_recv_pipe": true, 00:21:09.161 "enable_quickack": false, 00:21:09.161 "enable_placement_id": 0, 00:21:09.161 "enable_zerocopy_send_server": true, 00:21:09.161 "enable_zerocopy_send_client": false, 00:21:09.161 "zerocopy_threshold": 0, 00:21:09.161 "tls_version": 0, 00:21:09.161 "enable_ktls": false 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "sock_impl_set_options", 00:21:09.161 "params": { 00:21:09.161 "impl_name": "posix", 00:21:09.161 "recv_buf_size": 2097152, 00:21:09.161 "send_buf_size": 2097152, 00:21:09.161 "enable_recv_pipe": true, 00:21:09.161 "enable_quickack": false, 00:21:09.161 "enable_placement_id": 0, 00:21:09.161 "enable_zerocopy_send_server": true, 00:21:09.161 "enable_zerocopy_send_client": false, 00:21:09.161 "zerocopy_threshold": 0, 00:21:09.161 "tls_version": 0, 00:21:09.161 "enable_ktls": false 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "sock_impl_set_options", 00:21:09.161 "params": { 00:21:09.161 "impl_name": "uring", 00:21:09.161 "recv_buf_size": 2097152, 00:21:09.161 "send_buf_size": 2097152, 00:21:09.161 "enable_recv_pipe": true, 00:21:09.161 "enable_quickack": false, 00:21:09.161 "enable_placement_id": 0, 00:21:09.161 "enable_zerocopy_send_server": false, 00:21:09.161 "enable_zerocopy_send_client": false, 00:21:09.161 "zerocopy_threshold": 0, 00:21:09.161 "tls_version": 0, 00:21:09.161 "enable_ktls": false 00:21:09.161 } 00:21:09.161 } 00:21:09.161 ] 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "subsystem": "vmd", 00:21:09.161 "config": [] 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "subsystem": "accel", 00:21:09.161 "config": [ 00:21:09.161 { 00:21:09.161 "method": "accel_set_options", 00:21:09.161 "params": { 00:21:09.161 "small_cache_size": 128, 00:21:09.161 "large_cache_size": 16, 00:21:09.161 "task_count": 2048, 00:21:09.161 "sequence_count": 2048, 00:21:09.161 "buf_count": 2048 00:21:09.161 } 00:21:09.161 } 00:21:09.161 ] 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "subsystem": "bdev", 00:21:09.161 "config": [ 00:21:09.161 { 00:21:09.161 "method": "bdev_set_options", 00:21:09.161 "params": { 00:21:09.161 "bdev_io_pool_size": 65535, 00:21:09.161 "bdev_io_cache_size": 256, 00:21:09.161 "bdev_auto_examine": true, 00:21:09.161 "iobuf_small_cache_size": 128, 00:21:09.161 "iobuf_large_cache_size": 16 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "bdev_raid_set_options", 00:21:09.161 "params": { 00:21:09.161 "process_window_size_kb": 1024, 00:21:09.161 "process_max_bandwidth_mb_sec": 0 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "bdev_iscsi_set_options", 00:21:09.161 "params": { 00:21:09.161 "timeout_sec": 30 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "bdev_nvme_set_options", 00:21:09.161 "params": { 00:21:09.161 "action_on_timeout": "none", 00:21:09.161 "timeout_us": 0, 00:21:09.161 "timeout_admin_us": 0, 00:21:09.161 "keep_alive_timeout_ms": 10000, 00:21:09.161 "arbitration_burst": 0, 00:21:09.161 "low_priority_weight": 0, 00:21:09.161 "medium_priority_weight": 0, 00:21:09.161 "high_priority_weight": 0, 00:21:09.161 "nvme_adminq_poll_period_us": 
10000, 00:21:09.161 "nvme_ioq_poll_period_us": 0, 00:21:09.161 "io_queue_requests": 512, 00:21:09.161 "delay_cmd_submit": true, 00:21:09.161 "transport_retry_count": 4, 00:21:09.161 "bdev_retry_count": 3, 00:21:09.161 "transport_ack_timeout": 0, 00:21:09.161 "ctrlr_loss_timeout_sec": 0, 00:21:09.161 "reconnect_delay_sec": 0, 00:21:09.161 "fast_io_fail_timeout_sec": 0, 00:21:09.161 "disable_auto_failback": false, 00:21:09.161 "generate_uuids": false, 00:21:09.161 "transport_tos": 0, 00:21:09.161 "nvme_error_stat": false, 00:21:09.161 "rdma_srq_size": 0, 00:21:09.161 "io_path_stat": false, 00:21:09.161 "allow_accel_sequence": false, 00:21:09.161 "rdma_max_cq_size": 0, 00:21:09.161 "rdma_cm_event_timeout_ms": 0, 00:21:09.161 "dhchap_digests": [ 00:21:09.161 "sha256", 00:21:09.161 "sha384", 00:21:09.161 "sha512" 00:21:09.161 ], 00:21:09.161 "dhchap_dhgroups": [ 00:21:09.161 "null", 00:21:09.161 "ffdhe2048", 00:21:09.161 "ffdhe3072", 00:21:09.161 "ffdhe4096", 00:21:09.161 "ffdhe6144", 00:21:09.161 "ffdhe8192" 00:21:09.161 ] 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "bdev_nvme_attach_controller", 00:21:09.161 "params": { 00:21:09.161 "name": "nvme0", 00:21:09.161 "trtype": "TCP", 00:21:09.161 "adrfam": "IPv4", 00:21:09.161 "traddr": "127.0.0.1", 00:21:09.161 "trsvcid": "4420", 00:21:09.161 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:09.161 "prchk_reftag": false, 00:21:09.161 "prchk_guard": false, 00:21:09.161 "ctrlr_loss_timeout_sec": 0, 00:21:09.161 "reconnect_delay_sec": 0, 00:21:09.161 "fast_io_fail_timeout_sec": 0, 00:21:09.161 "psk": "key0", 00:21:09.161 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:09.161 "hdgst": false, 00:21:09.161 "ddgst": false, 00:21:09.161 "multipath": "multipath" 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "bdev_nvme_set_hotplug", 00:21:09.161 "params": { 00:21:09.161 "period_us": 100000, 00:21:09.161 "enable": false 00:21:09.161 } 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "method": "bdev_wait_for_examine" 00:21:09.161 } 00:21:09.161 ] 00:21:09.161 }, 00:21:09.161 { 00:21:09.161 "subsystem": "nbd", 00:21:09.161 "config": [] 00:21:09.161 } 00:21:09.161 ] 00:21:09.161 }' 00:21:09.161 09:43:54 keyring_file -- keyring/file.sh@115 -- # killprocess 84931 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84931 ']' 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84931 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84931 00:21:09.161 killing process with pid 84931 00:21:09.161 Received shutdown signal, test time was about 1.000000 seconds 00:21:09.161 00:21:09.161 Latency(us) 00:21:09.161 [2024-11-05T09:43:55.119Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:09.161 [2024-11-05T09:43:55.119Z] =================================================================================================================== 00:21:09.161 [2024-11-05T09:43:55.119Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84931' 00:21:09.161 
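Note: the trace above is the keyring_file config round-trip. The first bdevperf instance's live configuration (keyring keys plus the attached bdev_nvme controller) is captured with save_config, the process is killed, and the identical JSON is then fed to a fresh bdevperf through /dev/fd/63 so the same state is rebuilt at startup. A minimal sketch of that round-trip, assuming the rpc.py and bdevperf paths used in this run; the temporary filename is illustrative:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" -s /var/tmp/bperf.sock save_config > /tmp/bperf_config.json   # dump keyring + bdev_nvme config as JSON
  kill "$bperfpid" && wait "$bperfpid"                                 # stop the first bdevperf instance
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
      -r /var/tmp/bperf.sock -z -c /tmp/bperf_config.json              # restart with the saved config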
09:43:54 keyring_file -- common/autotest_common.sh@971 -- # kill 84931 00:21:09.161 09:43:54 keyring_file -- common/autotest_common.sh@976 -- # wait 84931 00:21:09.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:09.161 09:43:55 keyring_file -- keyring/file.sh@118 -- # bperfpid=85185 00:21:09.161 09:43:55 keyring_file -- keyring/file.sh@120 -- # waitforlisten 85185 /var/tmp/bperf.sock 00:21:09.161 09:43:55 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 85185 ']' 00:21:09.161 09:43:55 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:09.161 09:43:55 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:21:09.161 09:43:55 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:09.161 09:43:55 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:09.161 09:43:55 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:09.161 09:43:55 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:21:09.161 "subsystems": [ 00:21:09.161 { 00:21:09.161 "subsystem": "keyring", 00:21:09.161 "config": [ 00:21:09.161 { 00:21:09.161 "method": "keyring_file_add_key", 00:21:09.161 "params": { 00:21:09.161 "name": "key0", 00:21:09.161 "path": "/tmp/tmp.3poZ6AD9ix" 00:21:09.161 } 00:21:09.161 }, 00:21:09.162 { 00:21:09.162 "method": "keyring_file_add_key", 00:21:09.162 "params": { 00:21:09.162 "name": "key1", 00:21:09.162 "path": "/tmp/tmp.TPPF3i5p4Z" 00:21:09.162 } 00:21:09.162 } 00:21:09.162 ] 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "subsystem": "iobuf", 00:21:09.162 "config": [ 00:21:09.162 { 00:21:09.162 "method": "iobuf_set_options", 00:21:09.162 "params": { 00:21:09.162 "small_pool_count": 8192, 00:21:09.162 "large_pool_count": 1024, 00:21:09.162 "small_bufsize": 8192, 00:21:09.162 "large_bufsize": 135168, 00:21:09.162 "enable_numa": false 00:21:09.162 } 00:21:09.162 } 00:21:09.162 ] 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "subsystem": "sock", 00:21:09.162 "config": [ 00:21:09.162 { 00:21:09.162 "method": "sock_set_default_impl", 00:21:09.162 "params": { 00:21:09.162 "impl_name": "uring" 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "sock_impl_set_options", 00:21:09.162 "params": { 00:21:09.162 "impl_name": "ssl", 00:21:09.162 "recv_buf_size": 4096, 00:21:09.162 "send_buf_size": 4096, 00:21:09.162 "enable_recv_pipe": true, 00:21:09.162 "enable_quickack": false, 00:21:09.162 "enable_placement_id": 0, 00:21:09.162 "enable_zerocopy_send_server": true, 00:21:09.162 "enable_zerocopy_send_client": false, 00:21:09.162 "zerocopy_threshold": 0, 00:21:09.162 "tls_version": 0, 00:21:09.162 "enable_ktls": false 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "sock_impl_set_options", 00:21:09.162 "params": { 00:21:09.162 "impl_name": "posix", 00:21:09.162 "recv_buf_size": 2097152, 00:21:09.162 "send_buf_size": 2097152, 00:21:09.162 "enable_recv_pipe": true, 00:21:09.162 "enable_quickack": false, 00:21:09.162 "enable_placement_id": 0, 00:21:09.162 "enable_zerocopy_send_server": true, 00:21:09.162 "enable_zerocopy_send_client": false, 00:21:09.162 "zerocopy_threshold": 0, 00:21:09.162 "tls_version": 0, 00:21:09.162 "enable_ktls": false 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": 
"sock_impl_set_options", 00:21:09.162 "params": { 00:21:09.162 "impl_name": "uring", 00:21:09.162 "recv_buf_size": 2097152, 00:21:09.162 "send_buf_size": 2097152, 00:21:09.162 "enable_recv_pipe": true, 00:21:09.162 "enable_quickack": false, 00:21:09.162 "enable_placement_id": 0, 00:21:09.162 "enable_zerocopy_send_server": false, 00:21:09.162 "enable_zerocopy_send_client": false, 00:21:09.162 "zerocopy_threshold": 0, 00:21:09.162 "tls_version": 0, 00:21:09.162 "enable_ktls": false 00:21:09.162 } 00:21:09.162 } 00:21:09.162 ] 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "subsystem": "vmd", 00:21:09.162 "config": [] 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "subsystem": "accel", 00:21:09.162 "config": [ 00:21:09.162 { 00:21:09.162 "method": "accel_set_options", 00:21:09.162 "params": { 00:21:09.162 "small_cache_size": 128, 00:21:09.162 "large_cache_size": 16, 00:21:09.162 "task_count": 2048, 00:21:09.162 "sequence_count": 2048, 00:21:09.162 "buf_count": 2048 00:21:09.162 } 00:21:09.162 } 00:21:09.162 ] 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "subsystem": "bdev", 00:21:09.162 "config": [ 00:21:09.162 { 00:21:09.162 "method": "bdev_set_options", 00:21:09.162 "params": { 00:21:09.162 "bdev_io_pool_size": 65535, 00:21:09.162 "bdev_io_cache_size": 256, 00:21:09.162 "bdev_auto_examine": true, 00:21:09.162 "iobuf_small_cache_size": 128, 00:21:09.162 "iobuf_large_cache_size": 16 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "bdev_raid_set_options", 00:21:09.162 "params": { 00:21:09.162 "process_window_size_kb": 1024, 00:21:09.162 "process_max_bandwidth_mb_sec": 0 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "bdev_iscsi_set_options", 00:21:09.162 "params": { 00:21:09.162 "timeout_sec": 30 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "bdev_nvme_set_options", 00:21:09.162 "params": { 00:21:09.162 "action_on_timeout": "none", 00:21:09.162 "timeout_us": 0, 00:21:09.162 "timeout_admin_us": 0, 00:21:09.162 "keep_alive_timeout_ms": 10000, 00:21:09.162 "arbitration_burst": 0, 00:21:09.162 "low_priority_weight": 0, 00:21:09.162 "medium_priority_weight": 0, 00:21:09.162 "high_priority_weight": 0, 00:21:09.162 "nvme_adminq_poll_period_us": 10000, 00:21:09.162 "nvme_ioq_poll_period_us": 0, 00:21:09.162 "io_queue_requests": 512, 00:21:09.162 "delay_cmd_submit": true, 00:21:09.162 "transport_retry_count": 4, 00:21:09.162 "bdev_retry_count": 3, 00:21:09.162 "transport_ack_timeout": 0, 00:21:09.162 "ctrlr_loss_timeout_sec": 0, 00:21:09.162 "reconnect_delay_sec": 0, 00:21:09.162 "fast_io_fail_timeout_sec": 0, 00:21:09.162 "disable_auto_failback": false, 00:21:09.162 "generate_uuids": false, 00:21:09.162 "transport_tos": 0, 00:21:09.162 "nvme_error_stat": false, 00:21:09.162 "rdma_srq_size": 0, 00:21:09.162 "io_path_stat": false, 00:21:09.162 "allow_accel_sequence": false, 00:21:09.162 "rdma_max_cq_size": 0, 00:21:09.162 "rdma_cm_event_timeout_ms": 0, 00:21:09.162 "dhchap_digests": [ 00:21:09.162 "sha256", 00:21:09.162 "sha384", 00:21:09.162 "sha512" 00:21:09.162 ], 00:21:09.162 "dhchap_dhgroups": [ 00:21:09.162 "null", 00:21:09.162 "ffdhe2048", 00:21:09.162 "ffdhe3072", 00:21:09.162 "ffdhe4096", 00:21:09.162 "ffdhe6144", 00:21:09.162 "ffdhe8192" 00:21:09.162 ] 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "bdev_nvme_attach_controller", 00:21:09.162 "params": { 00:21:09.162 "name": "nvme0", 00:21:09.162 "trtype": "TCP", 00:21:09.162 "adrfam": "IPv4", 00:21:09.162 "traddr": "127.0.0.1", 00:21:09.162 "trsvcid": "4420", 
00:21:09.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:09.162 "prchk_reftag": false, 00:21:09.162 "prchk_guard": false, 00:21:09.162 "ctrlr_loss_timeout_sec": 0, 00:21:09.162 "reconnect_delay_sec": 0, 00:21:09.162 "fast_io_fail_timeout_sec": 0, 00:21:09.162 "psk": "key0", 00:21:09.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:09.162 "hdgst": false, 00:21:09.162 "ddgst": false, 00:21:09.162 "multipath": "multipath" 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "bdev_nvme_set_hotplug", 00:21:09.162 "params": { 00:21:09.162 "period_us": 100000, 00:21:09.162 "enable": false 00:21:09.162 } 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "method": "bdev_wait_for_examine" 00:21:09.162 } 00:21:09.162 ] 00:21:09.162 }, 00:21:09.162 { 00:21:09.162 "subsystem": "nbd", 00:21:09.162 "config": [] 00:21:09.162 } 00:21:09.162 ] 00:21:09.162 }' 00:21:09.162 09:43:55 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:09.421 [2024-11-05 09:43:55.168865] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 00:21:09.421 [2024-11-05 09:43:55.169187] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85185 ] 00:21:09.421 [2024-11-05 09:43:55.319865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:09.421 [2024-11-05 09:43:55.353543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:09.680 [2024-11-05 09:43:55.466054] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:09.680 [2024-11-05 09:43:55.506836] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:10.246 09:43:56 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:10.246 09:43:56 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:21:10.246 09:43:56 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:21:10.246 09:43:56 keyring_file -- keyring/file.sh@121 -- # jq length 00:21:10.246 09:43:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.812 09:43:56 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:21:10.812 09:43:56 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:21:10.812 09:43:56 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:21:10.812 09:43:56 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:21:10.812 09:43:56 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:11.377 09:43:57 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:21:11.378 09:43:57 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:21:11.378 09:43:57 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:21:11.378 09:43:57 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:21:11.378 09:43:57 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:21:11.378 09:43:57 keyring_file -- keyring/file.sh@1 -- # cleanup 00:21:11.378 09:43:57 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.3poZ6AD9ix /tmp/tmp.TPPF3i5p4Z 00:21:11.378 09:43:57 keyring_file -- keyring/file.sh@20 -- # killprocess 85185 00:21:11.378 09:43:57 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 85185 ']' 00:21:11.378 09:43:57 keyring_file -- common/autotest_common.sh@956 -- # kill -0 85185 00:21:11.378 09:43:57 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:11.378 09:43:57 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:11.378 09:43:57 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85185 00:21:11.636 killing process with pid 85185 00:21:11.636 Received shutdown signal, test time was about 1.000000 seconds 00:21:11.636 00:21:11.636 Latency(us) 00:21:11.636 [2024-11-05T09:43:57.594Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:11.636 [2024-11-05T09:43:57.594Z] =================================================================================================================== 00:21:11.636 [2024-11-05T09:43:57.594Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85185' 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@971 -- # kill 85185 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@976 -- # wait 85185 00:21:11.636 09:43:57 keyring_file -- keyring/file.sh@21 -- # killprocess 84927 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 84927 ']' 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@956 -- # kill -0 84927 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@957 -- # uname 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 84927 00:21:11.636 killing process with pid 84927 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 84927' 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@971 -- # kill 84927 00:21:11.636 09:43:57 keyring_file -- common/autotest_common.sh@976 -- # wait 84927 00:21:11.895 00:21:11.895 real 0m15.426s 00:21:11.895 user 0m40.278s 00:21:11.895 sys 0m2.715s 00:21:11.895 09:43:57 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:11.895 09:43:57 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:21:11.895 ************************************ 00:21:11.895 END TEST keyring_file 00:21:11.895 ************************************ 00:21:11.895 09:43:57 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:21:11.895 09:43:57 -- spdk/autotest.sh@290 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:11.895 09:43:57 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:11.895 09:43:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:11.895 09:43:57 -- common/autotest_common.sh@10 -- # set +x 00:21:11.895 ************************************ 00:21:11.895 START TEST keyring_linux 00:21:11.895 ************************************ 00:21:11.895 09:43:57 keyring_linux -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:21:11.895 Joined session keyring: 895736989 00:21:12.154 * Looking for test storage... 00:21:12.154 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@345 -- # : 1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:12.154 09:43:57 keyring_linux -- scripts/common.sh@368 -- # return 0 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:12.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.154 --rc genhtml_branch_coverage=1 00:21:12.154 --rc genhtml_function_coverage=1 00:21:12.154 --rc genhtml_legend=1 00:21:12.154 --rc geninfo_all_blocks=1 00:21:12.154 --rc geninfo_unexecuted_blocks=1 00:21:12.154 00:21:12.154 ' 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:12.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.154 --rc genhtml_branch_coverage=1 00:21:12.154 --rc genhtml_function_coverage=1 00:21:12.154 --rc genhtml_legend=1 00:21:12.154 --rc geninfo_all_blocks=1 00:21:12.154 --rc geninfo_unexecuted_blocks=1 00:21:12.154 00:21:12.154 ' 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:12.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.154 --rc genhtml_branch_coverage=1 00:21:12.154 --rc genhtml_function_coverage=1 00:21:12.154 --rc genhtml_legend=1 00:21:12.154 --rc geninfo_all_blocks=1 00:21:12.154 --rc geninfo_unexecuted_blocks=1 00:21:12.154 00:21:12.154 ' 00:21:12.154 09:43:57 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:12.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:12.154 --rc genhtml_branch_coverage=1 00:21:12.154 --rc genhtml_function_coverage=1 00:21:12.154 --rc genhtml_legend=1 00:21:12.154 --rc geninfo_all_blocks=1 00:21:12.154 --rc geninfo_unexecuted_blocks=1 00:21:12.154 00:21:12.154 ' 00:21:12.154 09:43:57 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:21:12.154 09:43:57 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:12.154 09:43:58 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:12.154 09:43:58 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5243355a-262e-4d66-b861-d6387f15e8f8 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5243355a-262e-4d66-b861-d6387f15e8f8 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:12.155 09:43:58 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:21:12.155 09:43:58 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:12.155 09:43:58 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:12.155 09:43:58 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:12.155 09:43:58 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.155 09:43:58 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.155 09:43:58 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.155 09:43:58 keyring_linux -- paths/export.sh@5 -- # export PATH 00:21:12.155 09:43:58 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@51 -- # : 0 
00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:12.155 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:21:12.155 /tmp/:spdk-test:key0 00:21:12.155 09:43:58 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:21:12.155 09:43:58 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:21:12.155 09:43:58 keyring_linux -- nvmf/common.sh@733 -- # python - 00:21:12.413 09:43:58 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:21:12.413 /tmp/:spdk-test:key1 00:21:12.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.413 09:43:58 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:21:12.413 09:43:58 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=85308 00:21:12.413 09:43:58 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 85308 00:21:12.413 09:43:58 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85308 ']' 00:21:12.414 09:43:58 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:21:12.414 09:43:58 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.414 09:43:58 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:12.414 09:43:58 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:12.414 09:43:58 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:12.414 09:43:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:12.414 [2024-11-05 09:43:58.188801] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
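Note: prep_key above (keyring/common.sh@15-23) converts each hex test key into an NVMe TLS interchange string ("NVMeTLSkey-1:00:...") via a small inline Python helper, writes it to a file, and restricts the file to mode 0600. A rough equivalent for key0, with the interchange string precomputed; the string and path are the literal values this run uses:

  path=/tmp/:spdk-test:key0
  echo 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$path"  # interchange-format PSK for 00112233445566778899aabbccddeeff
  chmod 0600 "$path"   # owner-only permissions, matching the test's chmod
  echo "$path"         # prep_key hands the path back to the caller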
00:21:12.414 [2024-11-05 09:43:58.189073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85308 ] 00:21:12.414 [2024-11-05 09:43:58.332853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.414 [2024-11-05 09:43:58.366191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.672 [2024-11-05 09:43:58.407842] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:12.672 09:43:58 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:12.672 [2024-11-05 09:43:58.549133] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.672 null0 00:21:12.672 [2024-11-05 09:43:58.581115] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:12.672 [2024-11-05 09:43:58.581295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.672 09:43:58 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:21:12.672 1056116913 00:21:12.672 09:43:58 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:21:12.672 703715435 00:21:12.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:12.672 09:43:58 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=85323 00:21:12.672 09:43:58 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:21:12.672 09:43:58 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 85323 /var/tmp/bperf.sock 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 85323 ']' 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:12.672 09:43:58 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:12.930 [2024-11-05 09:43:58.662408] Starting SPDK v25.01-pre git sha1 6b98809f9 / DPDK 24.03.0 initialization... 
00:21:12.930 [2024-11-05 09:43:58.662671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85323 ] 00:21:12.930 [2024-11-05 09:43:58.814701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.930 [2024-11-05 09:43:58.847528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.186 09:43:58 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:13.186 09:43:58 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:21:13.186 09:43:58 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:21:13.186 09:43:58 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:21:13.443 09:43:59 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:21:13.443 09:43:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:13.701 [2024-11-05 09:43:59.414846] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:13.701 09:43:59 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:13.701 09:43:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:21:13.958 [2024-11-05 09:43:59.746318] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.958 nvme0n1 00:21:13.958 09:43:59 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:21:13.958 09:43:59 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:21:13.958 09:43:59 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:13.958 09:43:59 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:13.958 09:43:59 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:13.958 09:43:59 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.216 09:44:00 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:21:14.216 09:44:00 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:14.216 09:44:00 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:21:14.216 09:44:00 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:21:14.216 09:44:00 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:21:14.216 09:44:00 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:14.216 09:44:00 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@25 -- # sn=1056116913 00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
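Note: linux.sh@66-67 above loads the two interchange-format PSKs into the kernel session keyring as "user" keys named :spdk-test:key0 and :spdk-test:key1, and linux.sh@16 later resolves a key name back to its serial number. The same steps in isolation, using the literal key material from this run (keyctl echoes each new key's serial, here 1056116913 and 703715435):

  keyctl add user :spdk-test:key0 'NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' @s
  keyctl add user :spdk-test:key1 'NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs:' @s
  sn=$(keyctl search @s user :spdk-test:key0)   # name -> serial within the session keyring (@s)
  keyctl print "$sn"                            # dump the payload so the test can compare it to the expected PSK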
00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@26 -- # [[ 1056116913 == \1\0\5\6\1\1\6\9\1\3 ]] 00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1056116913 00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:21:14.475 09:44:00 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:14.732 Running I/O for 1 seconds... 00:21:15.667 12848.00 IOPS, 50.19 MiB/s 00:21:15.667 Latency(us) 00:21:15.667 [2024-11-05T09:44:01.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.667 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:15.667 nvme0n1 : 1.01 12851.63 50.20 0.00 0.00 9906.43 6166.34 16324.42 00:21:15.667 [2024-11-05T09:44:01.625Z] =================================================================================================================== 00:21:15.667 [2024-11-05T09:44:01.625Z] Total : 12851.63 50.20 0.00 0.00 9906.43 6166.34 16324.42 00:21:15.667 { 00:21:15.667 "results": [ 00:21:15.667 { 00:21:15.667 "job": "nvme0n1", 00:21:15.667 "core_mask": "0x2", 00:21:15.667 "workload": "randread", 00:21:15.667 "status": "finished", 00:21:15.667 "queue_depth": 128, 00:21:15.667 "io_size": 4096, 00:21:15.667 "runtime": 1.009755, 00:21:15.667 "iops": 12851.632326653495, 00:21:15.667 "mibps": 50.20168877599021, 00:21:15.667 "io_failed": 0, 00:21:15.667 "io_timeout": 0, 00:21:15.667 "avg_latency_us": 9906.431392043265, 00:21:15.667 "min_latency_us": 6166.341818181818, 00:21:15.667 "max_latency_us": 16324.421818181818 00:21:15.667 } 00:21:15.667 ], 00:21:15.667 "core_count": 1 00:21:15.667 } 00:21:15.667 09:44:01 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:21:15.667 09:44:01 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:21:16.232 09:44:01 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:21:16.232 09:44:01 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:21:16.232 09:44:01 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:21:16.232 09:44:01 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:21:16.232 09:44:01 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:21:16.232 09:44:01 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:21:16.490 09:44:02 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:21:16.490 09:44:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:21:16.490 09:44:02 keyring_linux -- keyring/linux.sh@23 -- # return 00:21:16.490 09:44:02 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:16.490 09:44:02 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:16.490 09:44:02 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:21:16.749 [2024-11-05 09:44:02.454808] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:16.749 [2024-11-05 09:44:02.455464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb215d0 (107): Transport endpoint is not connected 00:21:16.749 [2024-11-05 09:44:02.456451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb215d0 (9): Bad file descriptor 00:21:16.749 [2024-11-05 09:44:02.457448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:21:16.749 [2024-11-05 09:44:02.457489] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:21:16.749 [2024-11-05 09:44:02.457499] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:21:16.749 [2024-11-05 09:44:02.457526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
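Note: the failed attach here is deliberate. linux.sh@84 wraps bdev_nvme_attach_controller in the NOT helper and hands it :spdk-test:key1 rather than the key0 used for the successful attach earlier, so the TLS connection is expected to fail; the JSON-RPC request and the -5 (Input/output error) response below record that failure. A rough sketch of the inversion pattern, simplified from the es=... bookkeeping visible in the trace:

  NOT() {   # succeed only when the wrapped command fails
      if "$@"; then return 1; else return 0; fi
  }
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
      -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1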
00:21:16.749 request: 00:21:16.749 { 00:21:16.749 "name": "nvme0", 00:21:16.749 "trtype": "tcp", 00:21:16.749 "traddr": "127.0.0.1", 00:21:16.749 "adrfam": "ipv4", 00:21:16.749 "trsvcid": "4420", 00:21:16.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:16.749 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:16.749 "prchk_reftag": false, 00:21:16.749 "prchk_guard": false, 00:21:16.749 "hdgst": false, 00:21:16.749 "ddgst": false, 00:21:16.749 "psk": ":spdk-test:key1", 00:21:16.749 "allow_unrecognized_csi": false, 00:21:16.749 "method": "bdev_nvme_attach_controller", 00:21:16.749 "req_id": 1 00:21:16.749 } 00:21:16.749 Got JSON-RPC error response 00:21:16.749 response: 00:21:16.749 { 00:21:16.749 "code": -5, 00:21:16.749 "message": "Input/output error" 00:21:16.749 } 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@33 -- # sn=1056116913 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1056116913 00:21:16.749 1 links removed 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@33 -- # sn=703715435 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 703715435 00:21:16.749 1 links removed 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@41 -- # killprocess 85323 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85323 ']' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85323 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85323 00:21:16.749 killing process with pid 85323 00:21:16.749 Received shutdown signal, test time was about 1.000000 seconds 00:21:16.749 00:21:16.749 Latency(us) 00:21:16.749 [2024-11-05T09:44:02.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.749 [2024-11-05T09:44:02.707Z] =================================================================================================================== 00:21:16.749 [2024-11-05T09:44:02.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:16.749 09:44:02 keyring_linux -- 
common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85323' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@971 -- # kill 85323 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@976 -- # wait 85323 00:21:16.749 09:44:02 keyring_linux -- keyring/linux.sh@42 -- # killprocess 85308 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 85308 ']' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 85308 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 85308 00:21:16.749 killing process with pid 85308 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 85308' 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@971 -- # kill 85308 00:21:16.749 09:44:02 keyring_linux -- common/autotest_common.sh@976 -- # wait 85308 00:21:17.008 00:21:17.008 real 0m5.123s 00:21:17.008 user 0m10.606s 00:21:17.008 sys 0m1.293s 00:21:17.008 09:44:02 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:17.008 09:44:02 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:21:17.008 ************************************ 00:21:17.008 END TEST keyring_linux 00:21:17.008 ************************************ 00:21:17.266 09:44:02 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:21:17.266 09:44:02 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:21:17.266 09:44:02 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:21:17.266 09:44:02 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:21:17.266 09:44:02 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:21:17.266 09:44:02 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:21:17.266 09:44:02 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:21:17.266 09:44:02 -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:17.266 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:21:17.266 09:44:02 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:21:17.266 09:44:02 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:21:17.266 09:44:02 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:21:17.266 09:44:02 -- common/autotest_common.sh@10 -- # set +x 00:21:19.170 INFO: APP EXITING 00:21:19.170 INFO: killing all VMs 
00:21:19.170 INFO: killing vhost app 00:21:19.170 INFO: EXIT DONE 00:21:19.429 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:19.687 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:21:19.688 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:21:20.288 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:20.288 Cleaning 00:21:20.288 Removing: /var/run/dpdk/spdk0/config 00:21:20.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:21:20.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:21:20.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:21:20.288 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:21:20.288 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:21:20.288 Removing: /var/run/dpdk/spdk0/hugepage_info 00:21:20.288 Removing: /var/run/dpdk/spdk1/config 00:21:20.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:21:20.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:21:20.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:21:20.288 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:21:20.288 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:21:20.288 Removing: /var/run/dpdk/spdk1/hugepage_info 00:21:20.288 Removing: /var/run/dpdk/spdk2/config 00:21:20.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:21:20.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:21:20.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:21:20.288 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:21:20.288 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:21:20.288 Removing: /var/run/dpdk/spdk2/hugepage_info 00:21:20.288 Removing: /var/run/dpdk/spdk3/config 00:21:20.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:21:20.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:21:20.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:21:20.288 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:21:20.288 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:21:20.288 Removing: /var/run/dpdk/spdk3/hugepage_info 00:21:20.288 Removing: /var/run/dpdk/spdk4/config 00:21:20.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:21:20.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:21:20.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:21:20.288 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:21:20.288 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:21:20.288 Removing: /var/run/dpdk/spdk4/hugepage_info 00:21:20.288 Removing: /dev/shm/nvmf_trace.0 00:21:20.288 Removing: /dev/shm/spdk_tgt_trace.pid56770 00:21:20.547 Removing: /var/run/dpdk/spdk0 00:21:20.547 Removing: /var/run/dpdk/spdk1 00:21:20.547 Removing: /var/run/dpdk/spdk2 00:21:20.547 Removing: /var/run/dpdk/spdk3 00:21:20.547 Removing: /var/run/dpdk/spdk4 00:21:20.547 Removing: /var/run/dpdk/spdk_pid56619 00:21:20.547 Removing: /var/run/dpdk/spdk_pid56770 00:21:20.547 Removing: /var/run/dpdk/spdk_pid56963 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57044 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57064 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57174 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57183 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57318 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57514 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57668 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57740 00:21:20.547 
Removing: /var/run/dpdk/spdk_pid57819 00:21:20.547 Removing: /var/run/dpdk/spdk_pid57918 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58003 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58036 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58066 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58141 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58222 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58655 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58694 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58738 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58746 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58808 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58816 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58878 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58894 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58939 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58957 00:21:20.547 Removing: /var/run/dpdk/spdk_pid58997 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59008 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59138 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59168 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59251 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59577 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59589 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59626 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59634 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59649 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59668 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59682 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59697 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59716 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59730 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59741 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59759 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59778 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59788 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59807 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59820 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59836 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59855 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59863 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59884 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59909 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59928 00:21:20.547 Removing: /var/run/dpdk/spdk_pid59952 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60024 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60047 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60062 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60085 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60100 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60102 00:21:20.547 Removing: /var/run/dpdk/spdk_pid60139 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60158 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60181 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60196 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60200 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60204 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60219 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60223 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60227 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60242 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60265 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60297 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60303 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60332 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60341 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60343 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60389 00:21:20.548 Removing: /var/run/dpdk/spdk_pid60395 00:21:20.548 Removing: 
00:21:20.548 Removing: /var/run/dpdk/spdk_pid60429
00:21:20.548 Removing: /var/run/dpdk/spdk_pid60431
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60444
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60446
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60454
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60462
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60464
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60546
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60588
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60697
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60725
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60770
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60785
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60801
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60821
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60853
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60868
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60942
00:21:20.806 Removing: /var/run/dpdk/spdk_pid60959
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61003
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61060
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61111
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61144
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61242
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61290
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61317
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61549
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61641
00:21:20.806 Removing: /var/run/dpdk/spdk_pid61664
00:21:20.807 Removing: /var/run/dpdk/spdk_pid61699
00:21:20.807 Removing: /var/run/dpdk/spdk_pid61727
00:21:20.807 Removing: /var/run/dpdk/spdk_pid61766
00:21:20.807 Removing: /var/run/dpdk/spdk_pid61794
00:21:20.807 Removing: /var/run/dpdk/spdk_pid61830
00:21:20.807 Removing: /var/run/dpdk/spdk_pid62215
00:21:20.807 Removing: /var/run/dpdk/spdk_pid62255
00:21:20.807 Removing: /var/run/dpdk/spdk_pid62588
00:21:20.807 Removing: /var/run/dpdk/spdk_pid63039
00:21:20.807 Removing: /var/run/dpdk/spdk_pid63304
00:21:20.807 Removing: /var/run/dpdk/spdk_pid64149
00:21:20.807 Removing: /var/run/dpdk/spdk_pid65054
00:21:20.807 Removing: /var/run/dpdk/spdk_pid65167
00:21:20.807 Removing: /var/run/dpdk/spdk_pid65240
00:21:20.807 Removing: /var/run/dpdk/spdk_pid66652
00:21:20.807 Removing: /var/run/dpdk/spdk_pid66961
00:21:20.807 Removing: /var/run/dpdk/spdk_pid70771
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71143
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71255
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71382
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71403
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71424
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71453
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71538
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71673
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71822
00:21:20.807 Removing: /var/run/dpdk/spdk_pid71896
00:21:20.807 Removing: /var/run/dpdk/spdk_pid72085
00:21:20.807 Removing: /var/run/dpdk/spdk_pid72153
00:21:20.807 Removing: /var/run/dpdk/spdk_pid72238
00:21:20.807 Removing: /var/run/dpdk/spdk_pid72596
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73000
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73001
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73002
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73262
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73522
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73899
00:21:20.807 Removing: /var/run/dpdk/spdk_pid73907
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74224
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74244
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74258
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74294
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74300
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74642
00:21:20.807 Removing: /var/run/dpdk/spdk_pid74691
00:21:20.807 Removing: /var/run/dpdk/spdk_pid75033
00:21:20.807 Removing: /var/run/dpdk/spdk_pid75227
00:21:20.807 Removing: /var/run/dpdk/spdk_pid75645
00:21:20.807 Removing: /var/run/dpdk/spdk_pid76187
00:21:20.807 Removing: /var/run/dpdk/spdk_pid77073
00:21:20.807 Removing: /var/run/dpdk/spdk_pid77713
00:21:20.807 Removing: /var/run/dpdk/spdk_pid77716
00:21:20.807 Removing: /var/run/dpdk/spdk_pid79743
00:21:20.807 Removing: /var/run/dpdk/spdk_pid79796
00:21:20.807 Removing: /var/run/dpdk/spdk_pid79843
00:21:20.807 Removing: /var/run/dpdk/spdk_pid79898
00:21:20.807 Removing: /var/run/dpdk/spdk_pid79997
00:21:20.807 Removing: /var/run/dpdk/spdk_pid80050
00:21:20.807 Removing: /var/run/dpdk/spdk_pid80098
00:21:20.807 Removing: /var/run/dpdk/spdk_pid80151
00:21:20.807 Removing: /var/run/dpdk/spdk_pid80509
00:21:20.807 Removing: /var/run/dpdk/spdk_pid81719
00:21:20.807 Removing: /var/run/dpdk/spdk_pid81858
00:21:20.807 Removing: /var/run/dpdk/spdk_pid82091
00:21:21.066 Removing: /var/run/dpdk/spdk_pid82687
00:21:21.066 Removing: /var/run/dpdk/spdk_pid82847
00:21:21.066 Removing: /var/run/dpdk/spdk_pid83005
00:21:21.066 Removing: /var/run/dpdk/spdk_pid83096
00:21:21.066 Removing: /var/run/dpdk/spdk_pid83261
00:21:21.066 Removing: /var/run/dpdk/spdk_pid83370
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84071
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84102
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84137
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84392
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84426
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84457
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84927
00:21:21.066 Removing: /var/run/dpdk/spdk_pid84931
00:21:21.066 Removing: /var/run/dpdk/spdk_pid85185
00:21:21.066 Removing: /var/run/dpdk/spdk_pid85308
00:21:21.066 Removing: /var/run/dpdk/spdk_pid85323
00:21:21.066 Clean
00:21:21.066 09:44:06 -- common/autotest_common.sh@1451 -- # return 0
00:21:21.066 09:44:06 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup
00:21:21.066 09:44:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:21.066 09:44:06 -- common/autotest_common.sh@10 -- # set +x
00:21:21.066 09:44:06 -- spdk/autotest.sh@387 -- # timing_exit autotest
00:21:21.066 09:44:06 -- common/autotest_common.sh@730 -- # xtrace_disable
00:21:21.066 09:44:06 -- common/autotest_common.sh@10 -- # set +x
00:21:21.066 09:44:06 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:21:21.066 09:44:06 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]]
00:21:21.066 09:44:06 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log
00:21:21.066 09:44:06 -- spdk/autotest.sh@392 -- # [[ y == y ]]
00:21:21.066 09:44:06 -- spdk/autotest.sh@394 -- # hostname
00:21:21.066 09:44:06 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:21:21.325 geninfo: WARNING: invalid characters removed from testname!
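The Cleaning block above is the post-test teardown of DPDK runtime state: the per-target directories spdk0 through spdk4 under /var/run/dpdk (each holding config, the fbarray_memseg files, fbarray_memzone, and hugepage_info), the per-PID lock files left by every spdk_tgt process the run spawned, and the trace files in /dev/shm. A minimal shell sketch of an equivalent cleanup follows; it assumes nothing else lives under the spdk* prefix on the test VM, and it is an illustration, not the autotest script's actual code.

#!/usr/bin/env bash
# Hypothetical equivalent of the "Cleaning" step logged above: drop DPDK
# runtime directories, per-PID lock files, and shared-memory trace files.
set -u

# Per-target runtime dirs (spdk0..spdk4) and per-PID lock files share the
# /var/run/dpdk/spdk* prefix, so one glob covers both.
for f in /var/run/dpdk/spdk*; do
    [ -e "$f" ] || continue
    echo "Removing: $f"
    rm -rf "$f"
done

# Trace files are named after the target's PID, e.g. spdk_tgt_trace.pid56770.
rm -f /dev/shm/nvmf_trace.* /dev/shm/spdk_tgt_trace.pid*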
00:21:47.867 09:44:32 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:51.158 09:44:36 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:53.692 09:44:39 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:56.256 09:44:42 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:21:59.541 09:44:44 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:02.074 09:44:47 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:22:04.608 09:44:50 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:22:04.608 09:44:50 -- spdk/autorun.sh@1 -- $ timing_finish
00:22:04.608 09:44:50 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]]
00:22:04.608 09:44:50 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:22:04.608 09:44:50 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:22:04.608 09:44:50 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:22:04.608 + [[ -n 5256 ]]
00:22:04.608 + sudo kill 5256
00:22:04.617 [Pipeline] }
00:22:04.633 [Pipeline] // timeout
00:22:04.638 [Pipeline] }
00:22:04.652 [Pipeline] // stage
00:22:04.657 [Pipeline] }
00:22:04.673 [Pipeline] // catchError
00:22:04.684 [Pipeline] stage
00:22:04.686 [Pipeline] { (Stop VM)
00:22:04.699 [Pipeline] sh
00:22:04.979 + vagrant halt
00:22:08.292 ==> default: Halting domain...
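Steps autotest.sh@395 through @404 above are standard lcov post-processing: the pre-test baseline (cov_base.info) is merged with the post-test capture (cov_test.info), and then DPDK sources, system headers under /usr, and the examples/vmd, app/spdk_lspci, and app/spdk_top trees are stripped from the total so only SPDK code counts toward coverage. A condensed sketch follows, assuming the same output directory; the $OUT variable is illustrative, and the separate prunes are batched into one call since lcov --remove accepts several patterns at once.

#!/usr/bin/env bash
# Illustrative condensation of the logged lcov post-processing steps.
OUT=/home/vagrant/spdk_repo/output   # i.e. spdk/../output in the log

# Merge the pre-test baseline with the post-test capture.
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Strip trees that should not count toward SPDK coverage.
lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused \
     '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' \
     -o "$OUT/cov_total.info"

# Drop the intermediate captures once the total tracefile exists.
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"

timing_finish then renders the 'Build Timing' graph from timing.txt with FlameGraph's flamegraph.pl, labeling each frame with the step name and using seconds as the count, before the job kills the leftover background process and moves on to stopping the VM.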
00:22:14.862 [Pipeline] sh
00:22:15.141 + vagrant destroy -f
00:22:19.329 ==> default: Removing domain...
00:22:19.341 [Pipeline] sh
00:22:19.621 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output
00:22:19.629 [Pipeline] }
00:22:19.644 [Pipeline] // stage
00:22:19.649 [Pipeline] }
00:22:19.663 [Pipeline] // dir
00:22:19.668 [Pipeline] }
00:22:19.683 [Pipeline] // wrap
00:22:19.689 [Pipeline] }
00:22:19.701 [Pipeline] // catchError
00:22:19.710 [Pipeline] stage
00:22:19.712 [Pipeline] { (Epilogue)
00:22:19.725 [Pipeline] sh
00:22:20.006 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:26.584 [Pipeline] catchError
00:22:26.586 [Pipeline] {
00:22:26.600 [Pipeline] sh
00:22:26.881 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:26.881 Artifacts sizes are good
00:22:26.890 [Pipeline] }
00:22:26.905 [Pipeline] // catchError
00:22:26.917 [Pipeline] archiveArtifacts
00:22:26.923 Archiving artifacts
00:22:27.029 [Pipeline] cleanWs
00:22:27.041 [WS-CLEANUP] Deleting project workspace...
00:22:27.041 [WS-CLEANUP] Deferred wipeout is used...
00:22:27.047 [WS-CLEANUP] done
00:22:27.049 [Pipeline] }
00:22:27.064 [Pipeline] // stage
00:22:27.070 [Pipeline] }
00:22:27.083 [Pipeline] // node
00:22:27.089 [Pipeline] End of Pipeline
00:22:27.137 Finished: SUCCESS
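Taken together, the Stop VM and Epilogue stages above reduce to a short teardown sequence, sketched below under the assumption that it runs from the per-VM working directory; the $WORKSPACE variable is illustrative, while the vagrant commands and helper scripts are the ones named in the log.

#!/usr/bin/env bash
# Hypothetical replay of the Stop VM / Epilogue stages, not the pipeline's code.
WORKSPACE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest

vagrant halt             # stop the test VM gracefully
vagrant destroy -f       # then remove the libvirt domain without prompting
mv output "$WORKSPACE/output"    # stage results where Jenkins archives them

"$WORKSPACE"/jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
"$WORKSPACE"/jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh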